
RFC 8049

YANG Data Model for L3VPN Service Delivery

Pages: 157
Obsoleted by:  8299
Part 4 of 6 – Pages 66 to 92


6.9. Security

The "security" container defines customer-specific security parameters for the site. The security options supported in the model are limited but may be extended via augmentation.

6.9.1. Authentication

The current model does not support any authentication parameters for the site connection, but such parameters may be added in the "authentication" container through augmentation.

6.9.2. Encryption

   Traffic encryption can be requested on the connection.  It may be
   performed at Layer 2 or Layer 3 by selecting the appropriate
   enumeration in the "layer" leaf.  For example, an SP may use IPsec
   when a customer requests Layer 3 encryption.  The encryption profile
   can be SP defined or customer specific.

   When an SP profile is used and a key (e.g., a pre-shared key) is
   allocated by the provider to be used by a customer, the SP should
   provide a way to communicate the key in a secured way to the
   customer.

   When a customer profile is used, the model supports only a
   pre-shared key for authentication, with the pre-shared key provided
   through the NETCONF or RESTCONF request.  A secure channel must be
   used to ensure that the pre-shared key cannot be intercepted.

   For security reasons, it may be necessary for the customer to change
   the pre-shared key on a regular basis.  To perform a key change, the
   user can ask the SP to change the pre-shared key by submitting a new
   pre-shared key for the site configuration, as shown below.  This
   mechanism might not be hitless.

   <site>
    <site-id>SITE1</site-id>
    <site-network-accesses>
     <site-network-access>
      <site-network-access-id>1</site-network-access-id>
      <security>
       <encryption-profile>
        <preshared-key>MY_NEW_KEY</preshared-key>
       </encryption-profile>
      </security>
     </site-network-access>
    </site-network-accesses>
   </site>

   A hitless key-change mechanism may be added through augmentation.

   Other key-management methodologies may be added through augmentation.
   A "pki" container, which is empty, has been created to help with
   support of PKI through augmentation.
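
   For reference, enabling Layer 3 encryption on a site-network-access
   might be requested as in the sketch below.  The "enabled" leaf name
   and the exact position of the "encryption" container are assumptions
   based on the description in this section (the "layer" leaf is the
   one mentioned above) and should be verified against the YANG module.

   <security>
     <encryption>
       <enabled>true</enabled>
       <layer>layer3</layer>
     </encryption>
   </security>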

6.10. Management

   The model proposes three types of common management options:

   o  provider-managed: The CE router is managed only by the provider.
      In this model, the responsibility boundary between the SP and the
      customer is between the CE and the customer network.

   o  customer-managed: The CE router is managed only by the customer.
      In this model, the responsibility boundary between the SP and the
      customer is between the PE and the CE.

   o  co-managed: The CE router is primarily managed by the provider;
      in addition, the SP allows customers to access the CE for
      configuration/monitoring purposes.  In the co-managed mode, the
      responsibility boundary is the same as the responsibility
      boundary for the provider-managed model.

   Based on the management model, different security options MAY be
   derived.  In the co-managed case, the model proposes some options to
   define the management address family (IPv4 or IPv6) and the
   associated management address.
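
   As an illustration, a co-managed site might carry management
   parameters such as those in the sketch below.  The "type" leaf
   appears in later examples of this document; the "address-family" and
   "address" leaf names, as well as the address value, are assumptions
   used here for illustration and should be checked against the YANG
   module.

   <management>
     <type>co-managed</type>
     <address-family>ipv4</address-family>
     <address>198.51.100.10</address>
   </management>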

6.11. Routing Protocols

"routing-protocol" defines which routing protocol must be activated between the provider and the customer router. The current model supports the following settings: bgp, rip, ospf, static, direct, and vrrp. The routing protocol defined applies at the provider-to-customer boundary. Depending on how the management model is administered, it may apply to the PE-CE boundary or the CE-to-customer boundary. In the case of a customer-managed site, the routing protocol defined will be activated between the PE and the CE router managed by the customer. In the case of a provider-managed site, the routing protocol defined will be activated between the CE managed by the SP and the router or LAN belonging to the customer. In this case, we expect the PE-CE routing to be configured based on the SP's rules, as both are managed by the same entity.

                               Rtg protocol
       192.0.2.0/24 ----- CE ----------------- PE1

                    Customer-managed site

             Rtg protocol
       Customer router ----- CE ----------------- PE1

                    Provider-managed site

   All the examples below will refer to a scenario for a customer-
   managed site.

6.11.1. Handling of Dual Stack

   All routing protocol types support dual stack by using the
   "address-family" leaf-list.

   Example of dual stack using the same routing protocol:

   <routing-protocols>
     <routing-protocol>
       <type>static</type>
       <static>
         <address-family>ipv4</address-family>
         <address-family>ipv6</address-family>
       </static>
     </routing-protocol>
   </routing-protocols>

   Example of dual stack using two different routing protocols:

   <routing-protocols>
     <routing-protocol>
       <type>rip</type>
       <rip>
         <address-family>ipv4</address-family>
       </rip>
     </routing-protocol>
     <routing-protocol>
       <type>ospf</type>
       <ospf>
         <address-family>ipv6</address-family>
       </ospf>
     </routing-protocol>
   </routing-protocols>

6.11.2. LAN Directly Connected to SP Network

   The routing protocol type "direct" SHOULD be used when a customer
   LAN is directly connected to the provider network and must be
   advertised in the IP VPN.

   LAN attached directly to provider network:

   192.0.2.0/24 ----- PE1

   In this case, the customer has a default route to the PE address.
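
   As a sketch, such a site-network-access could simply select the
   "direct" type in its routing protocols; it is assumed here, based on
   the description above, that no additional parameters are required
   for this type:

   <routing-protocols>
     <routing-protocol>
       <type>direct</type>
     </routing-protocol>
   </routing-protocols>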

6.11.3. LAN Directly Connected to SP Network with Redundancy

   The routing protocol type "vrrp" SHOULD be used and advertised in
   the IP VPN when

   o  the customer LAN is directly connected to the provider network,
      and

   o  LAN redundancy is expected.

   LAN attached directly to provider network with LAN redundancy:

   192.0.2.0/24 ------ PE1
            |
            +--- PE2

   In this case, the customer has a default route to the SP network.
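
   A corresponding routing-protocol entry might look like the sketch
   below; the "vrrp" container with an "address-family" leaf-list is
   assumed here by analogy with the other protocol containers shown in
   this document and should be checked against the YANG module:

   <routing-protocols>
     <routing-protocol>
       <type>vrrp</type>
       <vrrp>
         <address-family>ipv4</address-family>
       </vrrp>
     </routing-protocol>
   </routing-protocols>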

6.11.4. Static Routing

   The routing protocol type "static" MAY be used when a customer LAN
   is connected to the provider network through a CE router and must be
   advertised in the IP VPN.  In this case, the static routes give next
   hops (nh) to the CE and to the PE.  The customer has a default route
   to the SP network.

                          Static rtg
   192.0.2.0/24 ------ CE -------------- PE
                        |                |
                        |                Static route 192.0.2.0/24 nh CE
                        |
                        Static route 0.0.0.0/0 nh PE

6.11.5. RIP Routing

   The routing protocol type "rip" MAY be used when a customer LAN is
   connected to the provider network through a CE router and must be
   advertised in the IP VPN.  For IPv4, the model assumes that RIP
   version 2 is used.

   In the case of dual-stack routing requested through this model, the
   management system will be responsible for configuring RIP (including
   the correct version number) and associated address families on
   network elements.

                          RIP rtg
   192.0.2.0/24 ------ CE -------------- PE

6.11.6. OSPF Routing

   The routing protocol type "ospf" MAY be used when a customer LAN is
   connected to the provider network through a CE router and must be
   advertised in the IP VPN.  It can be used to extend an existing OSPF
   network and interconnect different areas.  See [RFC4577] for more
   details.

                        +---------------------+
                        |                     |
          OSPF          |                     |          OSPF
         area 1         |                     |         area 2
   (OSPF                |                     |                (OSPF
   area 1) --- CE ---------- PE           PE ----- CE ---      area 2)
                        |                     |
                        +---------------------+

   The model also proposes an option to create an OSPF sham link
   between two sites sharing the same area and having a backdoor link.
   The sham link is created by referencing the target site sharing the
   same OSPF area.  The management system will be responsible for
   checking to see if there is already a sham link configured for this
   VPN and area between the same pair of PEs.  If there is no existing
   sham link, the management system will provision one.  This sham link
   MAY be reused by other sites.

                           +------------------------+
                           |                        |
                           |                        |
                           |  PE (--sham link--)PE  |
                           |    |                |  |
                           +----|----------------|--+
                                | OSPF area 1    | OSPF area 1
                                |                |
                                CE1             CE2
                                |                |
                          (OSPF area 1)     (OSPF area 1)
                                |                |
                                +----------------+
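
   As an illustration, the sham link above could be requested by
   referencing the remote site from the OSPF routing parameters.  This
   is a sketch: the "sham-links", "sham-link", and "target-site" node
   names, as well as the "SITE2" identifier, are assumptions used for
   illustration and should be verified against the YANG module.

   <routing-protocols>
     <routing-protocol>
       <type>ospf</type>
       <ospf>
         <area-address>0.0.0.1</area-address>
         <address-family>ipv4</address-family>
         <sham-links>
           <sham-link>
             <target-site>SITE2</target-site>
           </sham-link>
         </sham-links>
       </ospf>
     </routing-protocol>
   </routing-protocols>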

   Regarding dual-stack support, the user MAY specify both IPv4 and IPv6
   address families, if both protocols should be routed through OSPF.
   As OSPF uses separate protocol instances for IPv4 and IPv6, the
   management system will need to configure both OSPF version 2 and OSPF
   version 3 on the PE-CE link.

   Example of OSPF routing parameters in the service model:

   <routing-protocols>
     <routing-protocol>
       <type>ospf</type>
       <ospf>
           <area-address>0.0.0.1</area-address>
           <address-family>ipv4</address-family>
           <address-family>ipv6</address-family>
       </ospf>
     </routing-protocol>
   </routing-protocols>

   Example of PE configuration done by the management system:

   router ospf 10
    area 0.0.0.1
     interface Ethernet0/0
   !
   router ospfv3 10
    area 0.0.0.1
     interface Ethernet0/0
    !

6.11.7. BGP Routing

   The routing protocol type "bgp" MAY be used when a customer LAN is
   connected to the provider network through a CE router and must be
   advertised in the IP VPN.

                          BGP rtg
   192.0.2.0/24 ------ CE -------------- PE

   The session addressing will be derived from connection parameters as
   well as the SP's knowledge of the addressing plan that is in use.

   In the case of dual-stack access, the user MAY request BGP routing
   for both IPv4 and IPv6 by specifying both address families.  It will
   be up to the SP and the management system to determine how to render
   the configuration (a single session, two sessions, multi-session,
   etc.).

   The service configuration below activates BGP on the PE-CE link for
   both IPv4 and IPv6.  BGP activation requires the SP to know the
   address of the customer peer.  The "static-address" allocation type
   for the IP connection MUST be used.

   <routing-protocols>
     <routing-protocol>
       <type>bgp</type>
       <bgp>
         <autonomous-system>65000</autonomous-system>
         <address-family>ipv4</address-family>
         <address-family>ipv6</address-family>
       </bgp>
     </routing-protocol>
   </routing-protocols>

   Depending on the SP's implementation choices, the management system
   can render this service configuration in different ways, as shown by
   the following examples.

   Example of PE configuration done by the management system
   (single IPv4 transport session):

   router bgp 100
    neighbor 203.0.113.2 remote-as 65000
    address-family ipv4 vrf Cust1
       neighbor 203.0.113.2 activate
    address-family ipv6 vrf Cust1
       neighbor 203.0.113.2 activate
       neighbor 203.0.113.2 route-map SET-NH-IPV6 out

   Example of PE configuration done by the management system
   (two sessions):

   router bgp 100
    neighbor 203.0.113.2 remote-as 65000
    neighbor 2001::2 remote-as 65000
    address-family ipv4 vrf Cust1
       neighbor 203.0.113.2 activate
    address-family ipv6 vrf Cust1
       neighbor 2001::2 activate

   Example of PE configuration done by the management system
   (multi-session):

   router bgp 100
    neighbor 203.0.113.2 remote-as 65000
    neighbor 203.0.113.2 multisession per-af
    address-family ipv4 vrf Cust1
       neighbor 203.0.113.2 activate
    address-family ipv6 vrf Cust1
       neighbor 203.0.113.2 activate
       neighbor 203.0.113.2 route-map SET-NH-IPV6 out

6.12. Service

The service defines service parameters associated with the site.

6.12.1. Bandwidth

   The service bandwidth refers to the bandwidth requirement between
   the PE and the CE (WAN link bandwidth).  The requested bandwidth is
   expressed as svc-input-bandwidth and svc-output-bandwidth in bits
   per second.  The input/output direction uses the customer site as a
   reference: "input bandwidth" means download bandwidth for the site,
   and "output bandwidth" means upload bandwidth for the site.

   The service bandwidth is only configurable at the
   site-network-access level.

   Using a different input and output bandwidth will allow the SP to
   determine if the customer allows for asymmetric bandwidth access,
   such as ADSL.  It can also be used to set rate-limiting in a
   different way for uploading and downloading on a symmetric bandwidth
   access.

   The bandwidth is a service bandwidth expressed primarily as IP
   bandwidth, but if the customer enables MPLS for Carriers' Carriers
   (CsC), this becomes MPLS bandwidth.
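
   For example, an ADSL-like asymmetric access offering 20 Mbps
   downstream and 1 Mbps upstream for the site could be expressed as
   follows (a sketch reusing the svc-input-bandwidth and
   svc-output-bandwidth leaves shown in the examples later in this
   document; the values are illustrative):

   <service>
     <svc-input-bandwidth>20000000</svc-input-bandwidth>
     <svc-output-bandwidth>1000000</svc-output-bandwidth>
   </service>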

6.12.2. QoS

   The model proposes to define QoS parameters in an abstracted way:

   o  qos-classification-policy: policy that defines a set of ordered
      rules to classify customer traffic.

   o  qos-profile: QoS scheduling profile to be applied.

6.12.2.1. QoS Classification

   QoS classification rules are handled by the
   "qos-classification-policy" container.  The
   qos-classification-policy container is an ordered list of rules that
   match a flow or application and set the appropriate target class of
   service (target-class-id).  The user can define the match using an
   application reference or a flow definition that is more specific
   (e.g., based on Layer 3 source and destination addresses, Layer 4
   ports, and Layer 4 protocol).  When a flow definition is used, the
   user can employ a "target-sites" leaf-list to identify the
   destination of a flow rather than using destination IP addresses.
   In such a case, an association between the site abstraction and the IP
   addresses used by this site must be done dynamically.  How this
   association is done is out of scope for this document; an
   implementation might not support this criterion and should advertise
   a deviation in this case.  A rule that does not have a match
   statement is considered a match-all rule.  An SP may implement a
   default terminal classification rule if the customer does not provide
   it.  It will be up to the SP to determine its default target class.
   The current model defines some applications, but new application
   identities may be added through augmentation.  The exact meaning of
   each application identity is up to the SP, so it will be necessary
   for the SP to advise the customer on the usage of application
   matching.

   Where the classification is done depends on the SP's implementation
   of the service, but classification concerns the flow coming from the
   customer site and entering the network.

                                  Provider network
                             +-----------------------+
      192.0.2.0/24
   198.51.100.0/24 ---- CE --------- PE

     Traffic flow
    ---------->

   In the figure above, the management system should implement the
   classification rule:

   o  in the ingress direction on the PE interface, if the CE is
      customer-managed.

   o  in the ingress direction on the CE interface connected to the
      customer LAN, if the CE is provider-managed.

   The figure below describes a sample service description of QoS
   classification for a site:

   <service>
     <qos>
       <qos-classification-policy>
         <rule>
           <id>1</id>
           <match-flow>
             <ipv4-src-prefix>192.0.2.0/24</ipv4-src-prefix>
             <ipv4-dst-prefix>203.0.113.1/32</ipv4-dst-prefix>
             <l4-dst-port>80</l4-dst-port>
             <l4-protocol>tcp</l4-protocol>
           </match-flow>
           <target-class-id>DATA2</target-class-id>
         </rule>
         <rule>
           <id>2</id>
           <match-flow>
             <ipv4-src-prefix>192.0.2.0/24</ipv4-src-prefix>
             <ipv4-dst-prefix>203.0.113.1/32</ipv4-dst-prefix>
             <l4-dst-port>21</l4-dst-port>
             <l4-protocol>tcp</l4-protocol>
           </match-flow>
           <target-class-id>DATA2</target-class-id>
         </rule>
         <rule>
           <id>3</id>
           <match-application>p2p</match-application>
           <target-class-id>DATA3</target-class-id>
         </rule>
         <rule>
           <id>4</id>
           <target-class-id>DATA1</target-class-id>
         </rule>
       </qos-classification-policy>
     </qos>
   </service>

   In the example above:

   o  HTTP traffic from the 192.0.2.0/24 LAN destined for 203.0.113.1/32
      will be classified in DATA2.

   o  FTP traffic from the 192.0.2.0/24 LAN destined for 203.0.113.1/32
      will be classified in DATA2.

   o  Peer-to-peer traffic will be classified in DATA3.

   o  All other traffic will be classified in DATA1.

   The order of rules is very important.  The management system
   responsible for translating those rules in network element
   configuration MUST keep the same processing order in network element
   configuration.  The order of rules is defined by the "id" leaf.  The
   lowest id MUST be processed first.

6.12.2.2. QoS Profile

   The user can choose either a standard profile provided by the
   operator or a custom profile.  The "qos-profile" container defines
   the traffic-scheduling policy to be used by the SP.

                                  Provider network
                             +-----------------------+
      192.0.2.0/24
   198.51.100.0/24 ---- CE --------- PE
                         \          /
                          qos-profile

   In the case of a provider-managed or co-managed connection, the
   provider should ensure scheduling according to the requested policy
   in both traffic directions (SP to customer and customer to SP).  As
   an example, a device-scheduling policy may be implemented on both
   the PE side and the CE side of the WAN link.  In the case of a
   customer-managed connection, the provider is only responsible for
   ensuring scheduling from the SP network to the customer site.  As an
   example, a device-scheduling policy may be implemented only on the
   PE side of the WAN link towards the customer.

   A custom QoS profile is defined as a list of classes of services and
   associated properties.  The properties are:

   o  rate-limit: used to rate-limit the class of service.  The value
      is expressed as a percentage of the global service bandwidth.
      When the qos-profile container is implemented on the CE side,
      svc-output-bandwidth is taken into account as a reference.  When
      it is implemented on the PE side, svc-input-bandwidth is used.

   o  latency: used to define the latency constraint of the class.  The
      latency constraint can be expressed as the lowest possible
      latency or a latency boundary expressed in milliseconds.  How
      this latency constraint will be fulfilled is up to the SP's
      implementation of
      the service: a strict priority queuing may be used on the access
      and in the core network, and/or a low-latency routing
      configuration may be created for this traffic class.

   o  jitter: used to define the jitter constraint of the class.  The
      jitter constraint can be expressed as the lowest possible jitter
      or a jitter boundary expressed in microseconds.  How this jitter
      constraint will be fulfilled is up to the SP's implementation of
      the service: a strict priority queuing may be used on the access
      and in the core network, and/or a jitter-aware routing
      configuration may be created for this traffic class.

   o  bandwidth: used to define a guaranteed amount of bandwidth for the
      class of service.  It is expressed as a percentage.  The
      "guaranteed-bw-percent" parameter uses available bandwidth as a
      reference.  When the qos-profile container is implemented on the
      CE side, svc-output-bandwidth is taken into account as a
      reference.  When it is implemented on the PE side,
      svc-input-bandwidth is used.  By default, the bandwidth
      reservation is only guaranteed at the access level.  The user can
      use the "end-to-end" leaf to request an end-to-end bandwidth
      reservation, including across the MPLS transport network.  (In
      other words, the SP will activate something in the MPLS core to
      ensure that the bandwidth request from the customer will be
      fulfilled by the MPLS core as well.)  How this is done (e.g., RSVP
      reservation, controller reservation) is out of scope for this
      document.

   Some constraints may not be offered by an SP; in this case, a
   deviation should be advertised.  In addition, due to network
   conditions, some constraints may not be completely fulfilled by the
   SP; in this case, the SP should advise the customer about the
   limitations.  How this communication is done is out of scope for this
   document.

   Example of service configuration using a standard QoS profile:

   <site-network-access>
    <site-network-access-id>1245HRTFGJGJ154654</site-network-access-id>
    <service>
     <svc-input-bandwidth>100000000</svc-input-bandwidth>
     <svc-output-bandwidth>100000000</svc-output-bandwidth>
     <qos>
      <qos-profile>
       <profile>PLATINUM</profile>
      </qos-profile>
     </qos>
    </service>
   </site-network-access>
   <site-network-access>
    <site-network-access-id>555555AAAA2344</site-network-access-id>
    <service>
     <svc-input-bandwidth>2000000</svc-input-bandwidth>
     <svc-output-bandwidth>2000000</svc-output-bandwidth>
     <qos>
      <qos-profile>
       <profile>GOLD</profile>
      </qos-profile>
     </qos>
    </service>
   </site-network-access>

   Example of service configuration using a custom QoS profile:

   <site-network-access>
    <site-network-access-id>Site1</site-network-access-id>
    <service>
     <svc-input-bandwidth>100000000</svc-input-bandwidth>
     <svc-output-bandwidth>100000000</svc-output-bandwidth>
     <qos>
      <qos-profile>
       <classes>
        <class>
         <class-id>REAL_TIME</class-id>
         <rate-limit>10</rate-limit>
         <latency>
          <use-lowest-latency/>
         </latency>
        </class>
        <class>
         <class-id>DATA1</class-id>
         <latency>
          <latency-boundary>70</latency-boundary>
         </latency>
         <bandwidth>
          <guaranteed-bw-percent>80</guaranteed-bw-percent>
         </bandwidth>
        </class>
        <class>
         <class-id>DATA2</class-id>
         <latency>
          <latency-boundary>200</latency-boundary>
         </latency>
         <bandwidth>
          <guaranteed-bw-percent>5</guaranteed-bw-percent>
          <end-to-end/>
         </bandwidth>
        </class>
       </classes>
      </qos-profile>
     </qos>
    </service>
   </site-network-access>

   The custom QoS profile for Site1 defines a REAL_TIME class with a
   latency constraint expressed as the lowest possible latency.  It also
   defines two data classes -- DATA1 and DATA2.  The two classes express
   a latency boundary constraint as well as a bandwidth reservation, as
   the REAL_TIME class is rate-limited to 10% of the service bandwidth
   (10% of 100 Mbps = 10 Mbps).  In cases where congestion occurs, the
   REAL_TIME traffic can go up to 10 Mbps (let's assume that only 5 Mbps
   are consumed).  DATA1 and DATA2 will share the remaining bandwidth
   (95 Mbps) according to their percentage.  So, the DATA1 class will be
   served with at least 76 Mbps of bandwidth, while the DATA2 class will
   be served with at least 4.75 Mbps.  The latency boundary information
   of the data class may help the SP define a specific buffer tuning or
   a specific routing within the network.  The maximum percentage to be
   used is not limited by this model but MUST be limited by the
   management system according to the policies authorized by the SP.

6.12.3. Multicast

The "multicast" container defines the type of site in the customer multicast service topology: source, receiver, or both. These parameters will help the management system optimize the multicast service. Users can also define the type of multicast relationship with the customer: router (requires a protocol such as PIM), host (IGMP or MLD), or both. An address family (IPv4, IPv6, or both) can also be defined.
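
   As a sketch, a receiver-only site using host signaling (IGMP/MLD)
   over IPv4 might be described as follows.  The leaf names
   ("multicast-site-type", "multicast-address-family", "protocol-type")
   and their values are assumptions based on the description above and
   should be checked against the YANG module.

   <multicast>
     <multicast-site-type>receiver-only</multicast-site-type>
     <multicast-address-family>
       <ipv4>true</ipv4>
       <ipv6>false</ipv6>
     </multicast-address-family>
     <protocol-type>host</protocol-type>
   </multicast>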

6.13. Enhanced VPN Features

6.13.1. Carriers' Carriers

   In the case of CsC [RFC4364], a customer may want to build an MPLS
   service using an IP VPN to carry its traffic.

          LAN customer1
               |
               |
              CE1
               |
               |
         -------------
          (vrf_cust1)       |
           CE1_ISP1         |  ISP1 POP
               |            |
               |  MPLS link |
               |            |
         -------------
          (vrf ISP1)
              PE1

             (...)  Provider backbone

              PE2
          (vrf ISP1)
               |
         ------------
               |            |
               |  MPLS link |
               |            |  ISP1 POP
           CE2_ISP1         |
          (vrf_cust1)       |
         ------------
               |
              CE2
               |
          LAN customer1

   In the figure above, ISP1 resells an IP VPN service but has no core
   network infrastructure between its POPs.  ISP1 uses an IP VPN as the
   core network infrastructure (belonging to another provider) between
   its POPs.

   In order to support CsC, the VPN service must indicate MPLS support
   by setting the "carrierscarrier" leaf to true in the vpn-service
   list.  The link between CE1_ISP1/PE1 and CE2_ISP1/PE2 must also run
   an MPLS signalling protocol.  This configuration is done at the site
   level.

   In the proposed model, LDP or BGP can be used as the MPLS signalling
   protocol.  In the case of LDP, an IGP routing protocol MUST also be
   activated.  In the case of BGP signalling, BGP MUST also be
   configured as the routing protocol.

   If CsC is enabled, the requested "svc-mtu" leaf will refer to the
   MPLS MTU and not to the IP MTU.
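
   For illustration, enabling CsC for a VPN service might look like the
   sketch below.  The enclosing element names ("vpn-services",
   "vpn-service", "vpn-id") follow the terminology used in this
   document and should be checked against the YANG module; the VPN
   identifier is hypothetical.

   <vpn-services>
     <vpn-service>
       <vpn-id>ISP1-CSC</vpn-id>
       <carrierscarrier>true</carrierscarrier>
     </vpn-service>
   </vpn-services>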

6.14. External ID References

The service model sometimes refers to external information through identifiers. As an example, to order a cloud-access to a particular cloud service provider (CSP), the model uses an identifier to refer to the targeted CSP. If a customer is directly using this service model as an API (through REST or NETCONF, for example) to order a particular service, the SP should provide a list of authorized identifiers. In the case of cloud-access, the SP will provide the associated identifiers for each available CSP. The same applies to other identifiers, such as std-qos-profile, OAM profile-name, and provider-profile for encryption. How an SP provides the meanings of those identifiers to the customer is out of scope for this document.
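
   For example, a cloud-access request might reference a CSP through an
   SP-provided identifier, as in the sketch below.  The "cloud-access"
   and "cloud-identifier" node names and the "CSP_A" value are
   assumptions used for illustration; the authorized identifier values
   come from the SP.

   <cloud-accesses>
     <cloud-access>
       <cloud-identifier>CSP_A</cloud-identifier>
     </cloud-access>
   </cloud-accesses>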

6.15. Defining NNIs

   An autonomous system (AS) is a single network or group of networks
   that is controlled by a common system administration group and that
   uses a single, clearly defined routing protocol.

   In some cases, VPNs need to span different ASes in different
   geographic areas or span different SPs.  The connection between ASes
   is established by the SPs and is seamless to the customer.  Examples
   include:

   o  a partnership between SPs (e.g., carrier, cloud) to extend their
      VPN service seamlessly.

   o  an internal administrative boundary within a single SP (e.g.,
      backhaul versus core versus data center).

   NNIs (network-to-network interfaces) have to be defined to extend
   the VPNs across multiple ASes.

   [RFC4364] defines multiple flavors of VPN NNI implementations.  Each
   implementation has pros and cons; this topic is outside the scope of
   this document.  For example, in an Inter-AS option A, autonomous
   system border router (ASBR) peers are connected by multiple
   interfaces with at least one of those interfaces spanning the two
   ASes while being present in the same VPN.  In order for these ASBRs
   to signal unlabeled IP prefixes, they associate each interface with a
   VPN routing and forwarding (VRF) instance and a Border Gateway
   Protocol (BGP) session.  As a result, traffic between the
   back-to-back VRFs is IP.  In this scenario, the VPNs are isolated
   from each other, and because the traffic is IP, QoS mechanisms that
   operate on IP traffic can be applied to achieve customer service
   level agreements (SLAs).

     --------                 --------------              -----------
    /        \               /              \            /           \
   | Cloud    |             |                |          |             |
   | Provider |-----NNI-----|                |----NNI---| Data Center |
   |  #1      |             |                |          |             |
    \        /              |                |           \           /
     --------               |                |            -----------
                            |                |
     --------               |   My network   |           -----------
    /        \              |                |          /           \
   | Cloud    |             |                |         |             |
   | Provider |-----NNI-----|                |---NNI---|  L3VPN      |
   |  #2      |             |                |         |  Partner    |
    \        /              |                |         |             |
     --------               |                |         |             |
                             \              /          |             |
                              --------------            \           /
                                    |                    -----------
                                    |
                                   NNI
                                    |
                                    |
                            -------------------
                           /                   \
                          |                     |
                          |                     |
                          |                     |
                          |     L3VPN Partner   |
                          |                     |
                           \                   /
                            -------------------

   The figure above describes an SP network called "My network" that has
   several NNIs.  This network uses NNIs to:

   o  increase its footprint by relying on L3VPN partners.

   o  connect its own data center services to the customer IP VPN.

   o  enable the customer to access its private resources located in a
      private cloud owned by some CSPs.

6.15.1. Defining an NNI with the Option A Flavor

             AS A                               AS B
      -------------------                -------------------
     /                   \              /                   \
    |                     |            |                     |
    |            ++++++++ Inter-AS link ++++++++             |
    |            +      +_______________+      +             |
    |            + (VRF1)----(VPN1)-----(VRF1) +             |
    |            + ASBR +               + ASBR +             |
    |            + (VRF2)----(VPN2)-----(VRF2) +             |
    |            +      +_______________+      +             |
    |            ++++++++               ++++++++             |
    |                     |            |                     |
    |                     |            |                     |
    |                     |            |                     |
    |            ++++++++ Inter-AS link ++++++++             |
    |            +      +_______________+      +             |
    |            + (VRF1)----(VPN1)-----(VRF1) +             |
    |            + ASBR +               + ASBR +             |
    |            + (VRF2)----(VPN2)-----(VRF2) +             |
    |            +      +_______________+      +             |
    |            ++++++++               ++++++++             |
    |                     |            |                     |
    |                     |            |                     |
     \                   /              \                   /
      -------------------                -------------------

   In option A, the two ASes are connected to each other with physical
   links on ASBRs.  For resiliency purposes, there may be multiple
   physical connections between the ASes.  A VPN connection -- physical
   or logical (on top of physical) -- is created for each VPN that
   needs to cross the AS boundary, thus providing a back-to-back VRF
   model.

   From a service model's perspective, this VPN connection can be seen
   as a site.  Let's say that AS B wants to extend some VPN connections
   for VPN C on AS A.  The administrator of AS B can use this service
   model to order a site on AS A.  All connection scenarios could be
   realized using the features of the current model.  As an example, the
   figure above shows two physical connections that have logical
   connections per VPN overlaid on them.  This could be seen as a
   dual-homed subVPN scenario.  Also, the administrator of AS B will be
   able to choose the appropriate routing protocol (e.g., E-BGP) to
   dynamically exchange routes between ASes.

   This document assumes that the option A NNI flavor SHOULD reuse the
   existing VPN site modeling.

   Example: a customer wants its CSP A to attach its virtual network N
   to an existing IP VPN (VPN1) that it has from L3VPN SP B.

           CSP A                              L3VPN SP B

     -----------------                    -------------------
    /                 \                  /                   \
   |       |           |                |                     |
   |  VM --|       ++++++++  NNI    ++++++++                  |--- VPN1
   |       |       +      +_________+      +                  |   Site#1
   |       |--------(VRF1)---(VPN1)--(VRF1)+                  |
   |       |       + ASBR +         + ASBR +                  |
   |       |       +      +_________+      +                  |
   |       |       ++++++++         ++++++++                  |
   |  VM --|           |                |                     |--- VPN1
   |       |Virtual    |                |                     |   Site#2
   |       |Network    |                |                     |
   |  VM --|           |                |                     |--- VPN1
   |       |           |                |                     |   Site#3
    \                 /                  \                   /
     -----------------                    -------------------
                                                  |
                                                  |
                                                VPN1
                                               Site#4

   To create the VPN connectivity, the CSP or the customer may use the
   L3VPN service model that SP B exposes.  We could consider that, as
   the NNI is shared, the physical connection (bearer) between CSP A and
   SP B already exists.  CSP A may request through a service model the
   creation of a new site with a single site-network-access
   (single-homing is used in the figure).  As a placement constraint,
   CSP A may use the existing bearer reference it has from SP B to force
   the placement of the VPN NNI on the existing link.  The XML below
   illustrates a possible configuration request to SP B:

   <site>
       <site-id>CSP_A_attachment</site-id>
       <location>
           <city>NY</city>
           <country-code>US</country-code>
       </location>
       <site-vpn-flavor>site-vpn-flavor-nni</site-vpn-flavor>
       <routing-protocols>
         <routing-protocol>
           <type>bgp</type>
           <bgp>
               <autonomous-system>500</autonomous-system>
               <address-family>ipv4</address-family>
           </bgp>
         </routing-protocol>
       </routing-protocols>
       <site-network-accesses>
        <site-network-access>
         <site-network-access-id>CSP_A_VN1</site-network-access-id>
          <ip-connection>
           <ipv4>
            <address-allocation-type>
            static-address
            </address-allocation-type>
            <addresses>
             <provider-address>203.0.113.1</provider-address>
             <customer-address>203.0.113.2</customer-address>
             <mask>30</mask>
            </addresses>
           </ipv4>
          </ip-connection>
          <service>
           <svc-input-bandwidth>450000000</svc-input-bandwidth>
           <svc-output-bandwidth>450000000</svc-output-bandwidth>
          </service>
          <vpn-attachment>
           <vpn-id>VPN1</vpn-id>
           <site-role>any-to-any-role</site-role>
          </vpn-attachment>
        </site-network-access>
       </site-network-accesses>
       <management>
           <type>customer-managed</type>
       </management>
   </site>

   The case described above is different from a scenario using the
   cloud-accesses container, as the cloud-access provides a public cloud
   access while this example enables access to private resources located
   in a CSP network.

6.15.2. Defining an NNI with the Option B Flavor

             AS A                               AS B
      -------------------                -------------------
     /                   \              /                   \
    |                     |            |                     |
    |            ++++++++ Inter-AS link ++++++++             |
    |            +      +_______________+      +             |
    |            +      +               +      +             |
    |            + ASBR +<---MP-BGP---->+ ASBR +             |
    |            +      +               +      +             |
    |            +      +_______________+      +             |
    |            ++++++++               ++++++++             |
    |                     |            |                     |
    |                     |            |                     |
    |                     |            |                     |
    |            ++++++++ Inter-AS link ++++++++             |
    |            +      +_______________+      +             |
    |            +      +               +      +             |
    |            + ASBR +<---MP-BGP---->+ ASBR +             |
    |            +      +               +      +             |
    |            +      +_______________+      +             |
    |            ++++++++               ++++++++             |
    |                     |            |                     |
    |                     |            |                     |
     \                   /              \                   /
      -------------------                -------------------

   In option B, the two ASes are connected to each other with physical
   links on ASBRs.  For resiliency purposes, there may be multiple
   physical connections between the ASes.  The VPN "connection" between
   ASes is done by exchanging VPN routes through MP-BGP [RFC4760].

   There are multiple flavors of implementations of such an NNI.  For
   example:

   1.  The NNI is internal to the provider and is situated between a
       backbone and a data center.  There is enough trust between the
       domains to not filter the VPN routes.  So, all the VPN routes
       are exchanged.  RT filtering may be implemented to save some
       unnecessary route states.

   2.  The NNI is used between providers that agreed to exchange VPN
       routes for specific RTs only.  Each provider is authorized to use
       the RT values from the other provider.

   3.  The NNI is used between providers that agreed to exchange VPN
       routes for specific RTs only.  Each provider has its own RT
       scheme.  So, a customer spanning the two networks will have
       different RTs in each network for a particular VPN.

   Case 1 does not require any service modeling, as the protocol enables
   the dynamic exchange of necessary VPN routes.

   Case 2 requires that an RT-filtering policy on ASBRs be maintained.
   From a service modeling point of view, it is necessary to agree on
   the list of RTs to authorize.

   In Case 3, both ASes need to agree on the VPN RT to exchange, as well
   as how to map a VPN RT from AS A to the corresponding RT in AS B (and
   vice versa).

   Those modelings are currently out of scope for this document.

          CSP A                               L3VPN SP B

     -----------------                    ------------------
    /                 \                  /                  \
   |       |           |                |                    |
   |  VM --|       ++++++++   NNI    ++++++++                |--- VPN1
   |       |       +      +__________+      +                |   Site#1
   |       |-------+      +          +      +                |
   |       |       + ASBR +<-MP-BGP->+ ASBR +                |
   |       |       +      +__________+      +                |
   |       |       ++++++++          ++++++++                |
   |  VM --|           |                |                    |--- VPN1
   |       |Virtual    |                |                    |   Site#2
   |       |Network    |                |                    |
   |  VM --|           |                |                    |--- VPN1
   |       |           |                |                    |   Site#3
    \                 /                 |                    |
     -----------------                  |                    |
                                         \                  /
                                          ------------------
                                                   |
                                                   |
                                                  VPN1
                                                 Site#4

   The example above describes an NNI connection between CSP A and SP
   network B.  The two SPs do not trust each other and use different RT
   allocation policies.  So, in terms of implementation, the customer VPN
   has a different RT in each network (RT A in CSP A and RT B in SP
   network B).  In order to connect the customer virtual network in
   CSP A to the customer IP VPN (VPN1) in SP network B, CSP A should
   request that SP network B open the customer VPN on the NNI (accept
   the appropriate RT).  Who does the RT translation depends on the
   agreement between the two SPs: SP B may permit CSP A to request VPN
   (RT) translation.

6.15.3. Defining an NNI with the Option C Flavor

             AS A                                AS B
      -------------------                 -------------------
     /                   \               /                   \
    |                     |             |                     |
    |                     |             |                     |
    |                     |             |                     |
    |            ++++++++ Multihop E-BGP ++++++++             |
    |            +      +                +      +             |
    |            +      +                +      +             |
    |            + RGW  +<----MP-BGP---->+ RGW  +             |
    |            +      +                +      +             |
    |            +      +                +      +             |
    |            ++++++++                ++++++++             |
    |                     |             |                     |
    |                     |             |                     |
    |                     |             |                     |
    |                     |             |                     |
    |                     |             |                     |
    |            ++++++++ Inter-AS link  ++++++++             |
    |            +      +________________+      +             |
    |            +      +                +      +             |
    |            + ASBR +                + ASBR +             |
    |            +      +                +      +             |
    |            +      +________________+      +             |
    |            ++++++++                ++++++++             |
    |                     |             |                     |
    |                     |             |                     |
    |                     |             |                     |
    |            ++++++++ Inter-AS link  ++++++++             |
    |            +      +________________+      +             |
    |            +      +                +      +             |
    |            + ASBR +                + ASBR +             |
    |            +      +                +      +             |
    |            +      +________________+      +             |
    |            ++++++++                ++++++++             |
    |                     |             |                     |
    |                     |             |                     |
     \                   /               \                   /
      -------------------                 -------------------

   From a VPN service's perspective, the option C NNI is very similar
   to option B, as an MP-BGP session is used to exchange VPN routes
   between the ASes.  The difference is that the forwarding plane and
   the control plane are on different nodes, so the MP-BGP session is
   multihop between routing gateway (RGW) nodes.

   From a VPN service's point of view, modeling options B and C will be
   identical.



