
RFC 8299

YANG Data Model for L3VPN Service Delivery

Pages: 188
Proposed Standard
Errata
Obsoletes:  8049
Part 5 of 8 – Pages 81 to 103

RFC 8299 - Page 81

6.7. Site Network Access Availability

   A site may be multihomed, meaning that it has multiple site-network-
   access points.  Placement constraints defined in previous sections
   will help ensure physical diversity.

   When the site-network-accesses are placed on the network, a customer
   may want to use a particular routing policy on those accesses.

   The "site-network-access/availability" container defines parameters
   for site redundancy.  The "access-priority" leaf defines a preference
   for a particular access.  This preference is used to model load-
   balancing or primary/backup scenarios.  The higher the access-
   priority value, the higher the preference will be.
   The figure below describes how the access-priority attribute can be
   used.

       Hub#1 LAN (Primary/backup)          Hub#2 LAN (Load-sharing)
         |                                                     |
         |    access-priority 1          access-priority 1     |
         |--- CE1 ------- PE1            PE3 --------- CE3 --- |
         |                                                     |
         |                                                     |
         |--- CE2 ------- PE2            PE4 --------- CE4 --- |
         |    access-priority 2          access-priority 1     |

                                 PE5
                                  |
                                  |
                                  |
                                 CE5
                                  |
                             Spoke#1 site (Single-homed)

   In the figure above, Hub#2 requires load-sharing, so all the site-
   network-accesses must use the same access-priority value.  On the
   other hand, as Hub#1 requires a primary site-network-access and a
   backup site-network-access, a higher access-priority setting will be
   configured on the primary site-network-access.

   Scenarios that are more complex can be modeled.  Let's consider a Hub
   site with five accesses to the network (A1,A2,A3,A4,A5).  The
   customer wants to load-share its traffic on A1,A2 in the nominal
   situation.  If A1 and A2 fail, the customer wants to load-share its
   traffic on A3 and A4; finally, if A1 to A4 are down, he wants to use
   A5.  We can model this easily by configuring the following access-
   priority values: A1=100, A2=100, A3=50, A4=50, A5=10.
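
   The priority-based selection an orchestration system might apply to
   this example can be sketched as follows.  This is an illustration
   only, not part of the model; the function and access names are
   invented.

   ```python
   def active_accesses(priorities, up):
       """Return the accesses traffic is load-shared over.

       priorities: dict mapping access name -> access-priority
                   (higher value = higher preference).
       up: set of access names that are currently operational.
       Traffic is load-shared across the highest-priority group
       that still has at least one operational access.
       """
       candidates = {a: p for a, p in priorities.items() if a in up}
       if not candidates:
           return []
       best = max(candidates.values())
       return sorted(a for a, p in candidates.items() if p == best)

   prio = {"A1": 100, "A2": 100, "A3": 50, "A4": 50, "A5": 10}
   print(active_accesses(prio, {"A1", "A2", "A3", "A4", "A5"}))  # ['A1', 'A2']
   print(active_accesses(prio, {"A3", "A4", "A5"}))              # ['A3', 'A4']
   print(active_accesses(prio, {"A5"}))                          # ['A5']
   ```

   Note that with only A1 down, this logic sends all traffic to A2
   alone, which is exactly the kind of limitation discussed in the next
   paragraph.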

   The access-priority scenario has some limitations.  An access-
   priority scenario like the previous one with five accesses but with
   the constraint of having traffic load-shared between A3 and A4 in the
   case where A1 OR A2 is down is not achievable.  But the authors
   believe that using the access-priority attribute will cover most of
   the deployment use cases and that the model can still be extended via
   augmentation to support additional use cases.

6.8. Traffic Protection

   The service model supports the ability to protect the traffic for a
   site.  Such protection provides a better level of availability in
   multihoming scenarios by, for example, using local-repair techniques
   in case of failures.  The associated level of service guarantee would
   be based on an agreement between the customer and the SP and is out
   of scope for this document.

                 Site#1                            Site#2
             CE1 ----- PE1 -- P1            P3 -- PE3 ---- CE3
              |                              |             |
              |                              |             |
             CE2 ----- PE2 -- P2            P4 -- PE4 ---- CE4
                       /
                      /
             CE5 ----+
                Site#3

   In the figure above, we consider an IP VPN service with three sites,
   including two dual-homed sites (Site#1 and Site#2).  For dual-homed
   sites, we consider PE1-CE1 and PE3-CE3 as primary and PE2-CE2 and PE4-CE4
   as backup for the example (even if protection also applies to load-
   sharing scenarios).

   In order to protect Site#2 against a failure, a user may set the
   "traffic-protection/enabled" leaf to true for Site#2.  How the
   traffic protection will be implemented is out of scope for this
   document.  However, in such a case, we could consider traffic coming
   from a remote site (Site#1 or Site#3), where the primary path would
   use PE3 as the egress PE.  PE3 may have preprogrammed a backup
   forwarding entry pointing to the backup path (through PE4-CE4) for
   all prefixes going through the PE3-CE3 link.  How the backup path is
   computed is out of scope for this document.  When the PE3-CE3 link
   fails, traffic is still received by PE3, but PE3 automatically
   switches traffic to the backup entry; the path will therefore be
   PE1-P1-(...)-P3-PE3-PE4-CE4 until the remote PEs reconverge and use
   PE4 as the egress PE.
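
   The local-repair behavior described above can be sketched as
   follows.  This is illustrative Python, not the RFC's mechanism; the
   class, prefix, and next-hop labels are invented.

   ```python
   class ForwardingEntry:
       """A PE forwarding entry preprogrammed with a backup next hop,
       as in the PE3 example above."""

       def __init__(self, prefix, primary_nh, backup_nh):
           self.prefix = prefix
           self.primary_nh = primary_nh   # e.g., the PE3-CE3 link
           self.backup_nh = backup_nh     # e.g., via PE4 towards CE4
           self.primary_up = True

       def next_hop(self):
           # Local repair: switch to the backup entry immediately on
           # failure, without waiting for remote PEs to reconverge on
           # PE4 as the egress PE.
           return self.primary_nh if self.primary_up else self.backup_nh

   entry = ForwardingEntry("203.0.113.0/24", "PE3-CE3", "PE3->PE4->CE4")
   print(entry.next_hop())    # PE3-CE3
   entry.primary_up = False   # the PE3-CE3 link fails
   print(entry.next_hop())    # PE3->PE4->CE4
   ```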

6.9. Security

   The "security" container defines customer-specific security
   parameters for the site.  The security options supported in the
   model are limited but may be extended via augmentation.

6.9.1. Authentication

   The current model does not support any authentication parameters for
   the site connection, but such parameters may be added in the
   "authentication" container through augmentation.

6.9.2. Encryption

   Traffic encryption can be requested on the connection.  It may be
   performed at Layer 2 or Layer 3 by selecting the appropriate
   enumeration in the "layer" leaf.  For example, an SP may use IPsec
   when a customer requests Layer 3 encryption.

   The encryption profile can be SP defined or customer specific.

   When an SP profile is used and a key (e.g., a pre-shared key) is
   allocated by the provider to be used by a customer, the SP should
   provide a way to communicate the key in a secured way to the
   customer.

   When a customer profile is used, the model supports only a
   pre-shared key for authentication of the site connection, with the
   pre-shared key provided through the NETCONF or RESTCONF request.  A
   secure channel must be used to ensure that the pre-shared key cannot
   be intercepted.

   For security reasons, it may be necessary for the customer to change
   the pre-shared key on a regular basis.  To perform a key change, the
   user can ask the SP to change the pre-shared key by submitting a new
   pre-shared key for the site configuration (as shown below with a
   corresponding XML snippet).  This mechanism might not be hitless.
      <?xml version="1.0"?>
      <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
        <vpn-services>
          <vpn-service>
            <vpn-id>VPNA</vpn-id>
          </vpn-service>
        </vpn-services>
        <sites>
          <site>
            <site-id>SITE1</site-id>
            <site-network-accesses>
              <site-network-access>
                <site-network-access-id>1</site-network-access-id>
                <security>
                  <encryption>
                    <encryption-profile>
                      <preshared-key>MY_NEW_KEY</preshared-key>
                    </encryption-profile>
                  </encryption>
                </security>
              </site-network-access>
            </site-network-accesses>
          </site>
        </sites>
      </l3vpn-svc>

   A hitless key change mechanism may be added through augmentation.

   Other key-management methodologies (e.g., PKI) may be added through
   augmentation.

6.10. Management

   The model defines three types of common management options:

   o  provider-managed: The CE router is managed only by the provider.
      In this model, the responsibility boundary between the SP and the
      customer is between the CE and the customer network.

   o  customer-managed: The CE router is managed only by the customer.
      In this model, the responsibility boundary between the SP and the
      customer is between the PE and the CE.

   o  co-managed: The CE router is primarily managed by the provider;
      in addition, the SP allows customers to access the CE for
      configuration/monitoring purposes.  In the co-managed mode, the
      responsibility boundary is the same as the responsibility
      boundary for the provider-managed model.
   Based on the management model, different security options MAY be
   derived.

   In the co-managed case, the model defines options for the management
   address family (IPv4 or IPv6) and the associated management address.

6.11. Routing Protocols

   "routing-protocol" defines which routing protocol must be activated
   between the provider and the customer router.  The current model
   supports the following settings: bgp, rip, ospf, static, direct, and
   vrrp.

   The routing protocol defined applies at the provider-to-customer
   boundary.  Depending on how the management model is administered, it
   may apply to the PE-CE boundary or the CE-to-customer boundary.  In
   the case of a customer-managed site, the routing protocol defined
   will be activated between the PE and the CE router managed by the
   customer.  In the case of a provider-managed site, the routing
   protocol defined will be activated between the CE managed by the SP
   and the router or LAN belonging to the customer.  In this case, we
   expect the PE-CE routing to be configured based on the SP's rules,
   as both are managed by the same entity.

                        Rtg protocol
      192.0.2.0/24 ----- CE ----------------- PE1

                  Customer-managed site

                           Rtg protocol
      Customer router ----- CE ----------------- PE1

                  Provider-managed site

   All the examples below will refer to a scenario for a customer-
   managed site.
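
   As a summary, a management system might map the management model to
   the boundary where the requested routing protocol is activated.
   This is a sketch; the function name is invented, and the co-managed
   entry is an assumption based on its provider-managed-like
   responsibility boundary.

   ```python
   def routing_activation_boundary(management_type):
       """Where the routing protocol requested in the service model is
       activated, per management model (as described above)."""
       return {
           # PE peers with the customer's own CE router.
           "customer-managed": "PE-CE",
           # The SP-managed CE peers with the customer's router or LAN;
           # PE-CE routing follows the SP's own rules.
           "provider-managed": "CE-to-customer",
           # Assumption: same boundary as provider-managed.
           "co-managed": "CE-to-customer",
       }[management_type]

   print(routing_activation_boundary("customer-managed"))  # PE-CE
   print(routing_activation_boundary("provider-managed"))  # CE-to-customer
   ```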

6.11.1. Handling of Dual Stack

   All routing protocol types support dual stack by using the
   "address-family" leaf-list.

   Example of a corresponding XML snippet with dual stack using the
   same routing protocol:

      <?xml version="1.0"?>
      <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
        <vpn-services>
          <vpn-service>
            <vpn-id>VPNA</vpn-id>
          </vpn-service>
        </vpn-services>
        <sites>
          <site>
            <site-id>SITE1</site-id>
            <routing-protocols>
              <routing-protocol>
                <type>static</type>
                <static>
                  <cascaded-lan-prefixes>
                    <ipv4-lan-prefixes>
                      <lan>192.0.2.0/24</lan>
                      <next-hop>203.0.113.1</next-hop>
                    </ipv4-lan-prefixes>
                    <ipv6-lan-prefixes>
                      <lan>2001:db8::1/64</lan>
                      <next-hop>2001:db8::2</next-hop>
                    </ipv6-lan-prefixes>
                  </cascaded-lan-prefixes>
                </static>
              </routing-protocol>
            </routing-protocols>
          </site>
        </sites>
      </l3vpn-svc>
   Example of a corresponding XML snippet with dual stack using two
   different routing protocols:

      <?xml version="1.0"?>
      <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
        <vpn-services>
          <vpn-service>
            <vpn-id>VPNA</vpn-id>
          </vpn-service>
        </vpn-services>
        <sites>
          <site>
            <site-id>SITE1</site-id>
            <routing-protocols>
              <routing-protocol>
                <type>rip</type>
                <rip>
                  <address-family>ipv4</address-family>
                </rip>
              </routing-protocol>
              <routing-protocol>
                <type>ospf</type>
                <ospf>
                  <address-family>ipv6</address-family>
                  <area-address>4.4.4.4</area-address>
                </ospf>
              </routing-protocol>
            </routing-protocols>
          </site>
        </sites>
      </l3vpn-svc>

6.11.2. LAN Directly Connected to SP Network

   The routing protocol type "direct" SHOULD be used when a customer
   LAN is directly connected to the provider network and must be
   advertised in the IP VPN.

   LAN attached directly to the provider network:

      192.0.2.0/24 ----- PE1

   In this case, the customer has a default route to the PE address.

6.11.3. LAN Directly Connected to SP Network with Redundancy

   The routing protocol type "vrrp" SHOULD be used and advertised in
   the IP VPN when

   o  the customer LAN is directly connected to the provider network,
      and

   o  LAN redundancy is expected.

   LAN attached directly to the provider network with LAN redundancy:

      192.0.2.0/24 ------ PE1
             |
             +--- PE2

   In this case, the customer has a default route to the SP network.

6.11.4. Static Routing

   The routing protocol type "static" MAY be used when a customer LAN
   is connected to the provider network through a CE router and must be
   advertised in the IP VPN.

   In this case, the static routes give next hops (nh) to the CE and to
   the PE.  The customer has a default route to the SP network.

                       Static rtg
      192.0.2.0/24 ------ CE -------------- PE
             |                       |
      Static route             Static route
      192.0.2.0/24 nh CE       0.0.0.0/0 nh PE

6.11.5. RIP Routing

   The routing protocol type "rip" MAY be used when a customer LAN is
   connected to the provider network through a CE router and must be
   advertised in the IP VPN.

   For IPv4, the model assumes that RIP version 2 is used.  In the case
   of dual-stack routing requested through this model, the management
   system will be responsible for configuring RIP (including the
   correct version number) and associated address families on network
   elements.

                        RIP rtg
      192.0.2.0/24 ------ CE -------------- PE
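
   The version selection implied above might look like this in a
   management system (a sketch; RIPng for IPv6 is an assumption, since
   the text only fixes RIPv2 for IPv4 and leaves the rest to the
   management system):

   ```python
   def rip_version(address_family):
       """Pick the RIP variant to configure for an address family."""
       if address_family == "ipv4":
           return "RIPv2"   # stated by the model for IPv4
       if address_family == "ipv6":
           return "RIPng"   # assumption: the usual IPv6 counterpart
       raise ValueError("unknown address family: " + address_family)

   # Dual-stack request: configure both variants on the PE-CE link.
   print([rip_version(af) for af in ("ipv4", "ipv6")])  # ['RIPv2', 'RIPng']
   ```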

6.11.6. OSPF Routing

   The routing protocol type "ospf" MAY be used when a customer LAN is
   connected to the provider network through a CE router and must be
   advertised in the IP VPN.  It can be used to extend an existing OSPF
   network and interconnect different areas.  See [RFC4577] for more
   details.

                           +---------------------+
                           |                     |
              OSPF         |                     |        OSPF
             area 1        |                     |       area 2
      (OSPF area 1) --- CE ---------- PE   PE ----- CE --- (OSPF area 2)
                           |                     |
                           +---------------------+

   The model also defines an option to create an OSPF sham link between
   two sites sharing the same area and having a backdoor link.  The
   sham link is created by referencing the target site sharing the same
   OSPF area.  The management system will be responsible for checking
   to see if there is already a sham link configured for this VPN and
   area between the same pair of PEs.  If there is no existing sham
   link, the management system will provision one.  This sham link MAY
   be reused by other sites.

           +------------------------+
           |                        |
           |                        |
           |  PE (--sham link--) PE |
           +----|----------------|--+
                | OSPF           | OSPF
                | area 1         | area 1
               CE1              CE2
          (OSPF area 1)    (OSPF area 1)
                |                |
                +----------------+

   Regarding dual-stack support, the user MAY specify both IPv4 and
   IPv6 address families, if both protocols should be routed through
   OSPF.  As OSPF uses separate protocol instances for IPv4 and IPv6,
   the management system will need to configure both OSPF version 2 and
   OSPF version 3 on the PE-CE link.
   Other OSPF parameters, such as timers, are typically set by the SP
   and communicated to the customer outside the scope of this model.

   Example of a corresponding XML snippet with OSPF routing parameters
   in the service model:

      <?xml version="1.0"?>
      <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
        <vpn-services>
          <vpn-service>
            <vpn-id>VPNA</vpn-id>
          </vpn-service>
        </vpn-services>
        <sites>
          <site>
            <site-id>SITE1</site-id>
            <routing-protocols>
              <routing-protocol>
                <type>ospf</type>
                <ospf>
                  <area-address>0.0.0.1</area-address>
                  <address-family>ipv4</address-family>
                  <address-family>ipv6</address-family>
                </ospf>
              </routing-protocol>
            </routing-protocols>
          </site>
        </sites>
      </l3vpn-svc>

   Example of PE configuration done by the management system:

                          router ospf 10
                           area 0.0.0.1
                            interface Ethernet0/0
                          !
                          router ospfv3 10
                           area 0.0.0.1
                            interface Ethernet0/0
                           !

6.11.7. BGP Routing

   The routing protocol type "bgp" MAY be used when a customer LAN is
   connected to the provider network through a CE router and must be
   advertised in the IP VPN.

                        BGP rtg
      192.0.2.0/24 ------ CE -------------- PE

   The session addressing will be derived from connection parameters as
   well as the SP's knowledge of the addressing plan that is in use.

   In the case of dual-stack access, the user MAY request BGP routing
   for both IPv4 and IPv6 by specifying both address families.  It will
   be up to the SP and management system to determine how to realize
   the configuration (two BGP sessions, a single session, multi-
   session, etc.).  This, along with other BGP parameters such as
   timers, is communicated to the customer outside the scope of this
   model.

   The service configuration below activates BGP on the PE-CE link for
   both IPv4 and IPv6.  BGP activation requires the SP to know the
   address of the customer peer.  If the site-network-access connection
   addresses are used for BGP peering, the "static-address" allocation
   type for the IP connection MUST be used.  Other peering mechanisms
   are outside the scope of this model.  An example of a corresponding
   XML snippet is described as follows:

      <?xml version="1.0"?>
      <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
        <vpn-services>
          <vpn-service>
            <vpn-id>VPNA</vpn-id>
          </vpn-service>
        </vpn-services>
        <sites>
          <site>
            <site-id>SITE1</site-id>
            <routing-protocols>
              <routing-protocol>
                <type>bgp</type>
                <bgp>
                  <autonomous-system>65000</autonomous-system>
                  <address-family>ipv4</address-family>
                  <address-family>ipv6</address-family>
                </bgp>
              </routing-protocol>
            </routing-protocols>
          </site>
        </sites>
      </l3vpn-svc>

   Depending on the SP's implementation, a management system can render
   this service configuration in different ways, as shown by the
   following examples.

   Example of PE configuration done by the management system (single
   IPv4 transport session):

            router bgp 100
             neighbor 203.0.113.2 remote-as 65000
             address-family ipv4 vrf Cust1
                neighbor 203.0.113.2 activate
             address-family ipv6 vrf Cust1
                neighbor 203.0.113.2 activate
                neighbor 203.0.113.2 route-map SET-NH-IPV6 out

   Example of PE configuration done by the management system (two
   sessions):

                   router bgp 100
                    neighbor 203.0.113.2 remote-as 65000
                    neighbor 2001::2 remote-as 65000
                    address-family ipv4 vrf Cust1
                       neighbor 203.0.113.2 activate
                    address-family ipv6 vrf Cust1
                       neighbor 2001::2 activate

   Example of PE configuration done by the management system (multi-
   session):

            router bgp 100
             neighbor 203.0.113.2 remote-as 65000
             neighbor 203.0.113.2 multisession per-af
             address-family ipv4 vrf Cust1
                neighbor 203.0.113.2 activate
             address-family ipv6 vrf Cust1
                neighbor 203.0.113.2 activate
                neighbor 203.0.113.2 route-map SET-NH-IPV6 out

6.12. Service

   The "service" container defines service parameters associated with
   the site.

6.12.1. Bandwidth

   The service bandwidth refers to the bandwidth requirement between
   the PE and the CE (WAN link bandwidth).  The requested bandwidth is
   expressed as svc-input-bandwidth and svc-output-bandwidth in bits
   per second.  The input/output direction uses the customer site as a
   reference: "input bandwidth" means download bandwidth for the site,
   and "output bandwidth" means upload bandwidth for the site.

   The service bandwidth is only configurable at the site-network-
   access level.

   Using different input and output bandwidth values will allow the SP
   to determine whether the customer allows for asymmetric bandwidth
   access, such as ADSL.  It can also be used to set rate-limiting in a
   different way for uploading and downloading on a symmetric bandwidth
   access.

   The bandwidth is a service bandwidth expressed primarily as IP
   bandwidth, but if the customer enables MPLS for Carriers' Carriers
   (CsC), this becomes MPLS bandwidth.
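
   A minimal sketch of the direction semantics defined above, from the
   customer site's point of view (the helper name and return keys are
   invented):

   ```python
   def site_view(svc_input_bw, svc_output_bw):
       """Interpret svc-input-bandwidth / svc-output-bandwidth (bps)
       with the customer site as the reference point."""
       return {
           "download-bps": svc_input_bw,    # traffic towards the site
           "upload-bps": svc_output_bw,     # traffic from the site
           # Different values signal an asymmetric access (e.g., ADSL).
           "asymmetric": svc_input_bw != svc_output_bw,
       }

   print(site_view(20_000_000, 1_000_000))
   # {'download-bps': 20000000, 'upload-bps': 1000000, 'asymmetric': True}
   ```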

6.12.2. MTU

   The service MTU refers to the maximum PDU size that the customer may
   use.  If the customer sends packets longer than the requested
   service MTU, the network may discard them (or, in the case of IPv4,
   fragment them).
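
   The per-packet behavior described above can be sketched as follows
   (names are illustrative; the don't-fragment handling is an
   assumption reflecting ordinary IPv4 semantics):

   ```python
   def handle_packet(length, svc_mtu, ip_version, dont_fragment=False):
       """Decide what the network may do with an oversized packet."""
       if length <= svc_mtu:
           return "forward"
       # Only IPv4 routers fragment in-network; IPv6 never does, and an
       # IPv4 packet with the DF bit set cannot be fragmented either.
       if ip_version == 4 and not dont_fragment:
           return "fragment"
       return "discard"

   print(handle_packet(1400, 1514, 4))  # forward
   print(handle_packet(1600, 1514, 4))  # fragment
   print(handle_packet(1600, 1514, 6))  # discard
   ```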

6.12.3. QoS

   The model defines QoS parameters in an abstracted way:

   o  qos-classification-policy: policy that defines a set of ordered
      rules to classify customer traffic.

   o  qos-profile: QoS scheduling profile to be applied.
6.12.3.1. QoS Classification

   QoS classification rules are handled by the "qos-classification-
   policy" container.  The qos-classification-policy container is an
   ordered list of rules that match a flow or application and set the
   appropriate target class of service (target-class-id).  The user can
   define the match using an application reference or a flow definition
   that is more specific (e.g., based on Layer 3 source and destination
   addresses, Layer 4 ports, and Layer 4 protocol).  When a flow
   definition is used, the user can employ a "target-sites" leaf-list
   to
   identify the destination of a flow rather than using destination IP
   addresses.  In such a case, an association between the site
   abstraction and the IP addresses used by this site must be done
   dynamically.  How this association is done is out of scope for this
   document.  The association of a site to an IP VPN is done through the
   "vpn-attachment" container.  Therefore, the user can also employ
   "target-sites" leaf-list and "vpn-attachment" to identify the
   destination of a flow targeted to a specific VPN service.  A rule
   that does not have a match statement is considered a match-all rule.
   An SP may implement a default terminal classification rule if the
   customer does not provide it.  It will be up to the SP to determine
   its default target class.  The current model defines some
   applications, but new application identities may be added through
   augmentation.  The exact meaning of each application identity is up
   to the SP, so it will be necessary for the SP to advise the customer
   on the usage of application matching.

   Where the classification is done depends on the SP's implementation
   of the service, but classification concerns the flow coming from the
   customer site and entering the network.

                                           Provider network
                                      +-----------------------+
               192.0.2.0/24
            198.51.100.0/24 ---- CE --------- PE

              Traffic flow
             ---------->

   In the figure above, the management system should implement the
   classification rule:

   o  in the ingress direction on the PE interface, if the CE is
      customer-managed.

   o  in the ingress direction on the CE interface connected to the
      customer LAN, if the CE is provider-managed.

   The figure below describes a sample service description of QoS
   classification for a site:

     <?xml version="1.0"?>
     <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
       <vpn-services>
         <vpn-service>
           <vpn-id>VPNA</vpn-id>
         </vpn-service>
       </vpn-services>
       <sites>
         <site>
           <site-id>SITE1</site-id>
           <service>
             <qos>
               <qos-classification-policy>
                 <rule>
                   <id>SvrA-http</id>
                   <match-flow>
                     <ipv4-src-prefix>192.0.2.0/24</ipv4-src-prefix>
                     <ipv4-dst-prefix>203.0.113.1/32</ipv4-dst-prefix>
                     <l4-dst-port>80</l4-dst-port>
                      <protocol-field>tcp</protocol-field>
                   </match-flow>
                   <target-class-id>DATA2</target-class-id>
                 </rule>
                 <rule>
                   <id>SvrA-ftp</id>
                   <match-flow>
                     <ipv4-src-prefix>192.0.2.0/24</ipv4-src-prefix>
                     <ipv4-dst-prefix>203.0.113.1/32</ipv4-dst-prefix>
                     <l4-dst-port>21</l4-dst-port>
                     <protocol-field>tcp</protocol-field>
                   </match-flow>
                   <target-class-id>DATA2</target-class-id>
                 </rule>
                 <rule>
                   <id>p2p</id>
                   <match-application>p2p</match-application>
                   <target-class-id>DATA3</target-class-id>
                 </rule>
                 <rule>
                   <id>any</id>
                   <target-class-id>DATA1</target-class-id>
                 </rule>
               </qos-classification-policy>
             </qos>
           </service>
         </site>
       </sites>
     </l3vpn-svc>

   In the example above:

   o  HTTP traffic from the 192.0.2.0/24 LAN destined for 203.0.113.1/32
      will be classified in DATA2.
   o  FTP traffic from the 192.0.2.0/24 LAN destined for 203.0.113.1/32
      will be classified in DATA2.

   o  Peer-to-peer traffic will be classified in DATA3.

   o  All other traffic will be classified in DATA1.

   The order of rule list entries is defined by the user.  The
   management system responsible for translating those rules in network
   element configuration MUST keep the same processing order in network
   element configuration.
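
   The ordered, first-match evaluation described above can be sketched
   as follows.  The dict encoding of rules and flows is illustrative
   only, not the YANG structure; the rules mirror the example above.

   ```python
   # Ordered rule list: a rule without match criteria is a match-all
   # terminal rule, like the "any" rule in the example.
   RULES = [
       {"id": "SvrA-http",
        "match": {"l4-dst-port": 80, "ipv4-dst": "203.0.113.1"},
        "target": "DATA2"},
       {"id": "SvrA-ftp",
        "match": {"l4-dst-port": 21, "ipv4-dst": "203.0.113.1"},
        "target": "DATA2"},
       {"id": "any", "match": {}, "target": "DATA1"},
   ]

   def classify(flow, rules):
       # Rules are evaluated in user-defined order; first match wins,
       # so the management system must preserve this order on devices.
       for rule in rules:
           if all(flow.get(k) == v for k, v in rule["match"].items()):
               return rule["target"]
       return None  # an SP default terminal rule could apply here

   print(classify({"l4-dst-port": 80, "ipv4-dst": "203.0.113.1"}, RULES))
   # DATA2
   print(classify({"l4-dst-port": 443}, RULES))
   # DATA1
   ```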

6.12.3.2. QoS Profile
   The user can choose either a standard profile provided by the
   operator or a custom profile.

   The "qos-profile" container defines the traffic-scheduling policy to
   be used by the SP.

                                       Provider network
                                  +-----------------------+
            192.0.2.0/24
         198.51.100.0/24 ---- CE --------- PE
                                \         /
                                 qos-profile

   A custom QoS profile is defined as a list of classes of services and
   associated properties.  The properties are as follows:

   o  direction: used to specify the direction to which the QoS profile
      is applied.  This model supports three direction settings:
      "Site-to-WAN", "WAN-to-Site", and "both".  By default, the "both"
      direction value is used.  If the direction is "both", the
      provider should ensure scheduling according to the requested
      policy in both traffic directions (SP to customer and customer to
      SP).  As an example, a device-scheduling policy may be
      implemented on both the PE side and the CE side of the WAN link.
      If the direction is "WAN-to-Site", the provider should ensure
      scheduling from the SP network to the customer site.  As an
      example, a device-scheduling policy may be implemented only on
      the PE side of the WAN link towards the customer.

   o  rate-limit: used to rate-limit the class of service.  The value
      is expressed as a percentage of the global service bandwidth.
      When the qos-profile container is implemented on the CE side,
      svc-output-bandwidth is taken into account as a reference.  When
      it is implemented on the PE side, svc-input-bandwidth is used.
   o  latency: used to define the latency constraint of the class.  The
      latency constraint can be expressed as the lowest possible latency
      or a latency boundary expressed in milliseconds.  How this latency
      constraint will be fulfilled is up to the SP's implementation of
      the service: a strict priority queuing may be used on the access
      and in the core network, and/or a low-latency routing
      configuration may be created for this traffic class.

   o  jitter: used to define the jitter constraint of the class.  The
      jitter constraint can be expressed as the lowest possible jitter
      or a jitter boundary expressed in microseconds.  How this jitter
      constraint will be fulfilled is up to the SP's implementation of
      the service: a strict priority queuing may be used on the access
      and in the core network, and/or a jitter-aware routing
      configuration may be created for this traffic class.

   o  bandwidth: used to define a guaranteed amount of bandwidth for the
      class of service.  It is expressed as a percentage.  The
      "guaranteed-bw-percent" parameter uses available bandwidth as a
      reference.  When the qos-profile container is implemented on the
      CE side, svc-output-bandwidth is taken into account as a
      reference.  When it is implemented on the PE side, svc-input-
      bandwidth is used.  By default, the bandwidth reservation is only
      guaranteed at the access level.  The user can use the "end-to-end"
      leaf to request an end-to-end bandwidth reservation, including
      across the MPLS transport network.  (In other words, the SP will
      activate something in the MPLS core to ensure that the bandwidth
      request from the customer will be fulfilled by the MPLS core as
      well.)  How this is done (e.g., RSVP reservation, controller
      reservation) is out of scope for this document.
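
   Converting the percentage-based rate-limit and guaranteed-bandwidth
   parameters into absolute rates, using the CE-side/PE-side reference
   rule stated above, can be sketched as follows (function names are
   invented):

   ```python
   def reference_bandwidth(side, svc_input_bw, svc_output_bw):
       """Pick the reference bandwidth in bps.

       CE-side instantiation uses svc-output-bandwidth as the
       reference; PE-side instantiation uses svc-input-bandwidth.
       """
       return svc_output_bw if side == "CE" else svc_input_bw

   def absolute_rate(percent, side, svc_input_bw, svc_output_bw):
       """Absolute rate (bps) for a percentage-based class parameter."""
       return reference_bandwidth(side, svc_input_bw, svc_output_bw) \
           * percent // 100

   # 30% of a 100/20 Mbps (input/output) access:
   print(absolute_rate(30, "CE", 100_000_000, 20_000_000))  # 6000000
   print(absolute_rate(30, "PE", 100_000_000, 20_000_000))  # 30000000
   ```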

   In addition, due to network conditions, some constraints may not be
   completely fulfilled by the SP; in this case, the SP should advise
   the customer about the limitations.  How this communication is done
   is out of scope for this document.

   Example of service configuration using a standard QoS profile with
   the following corresponding XML snippet:

<?xml version="1.0"?>
<l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
 <vpn-profiles>
  <valid-provider-identifiers>
   <qos-profile-identifier>
    <id>GOLD</id>
   </qos-profile-identifier>
   <qos-profile-identifier>
    <id>PLATINUM</id>
   </qos-profile-identifier>
  </valid-provider-identifiers>
 </vpn-profiles>
 <vpn-services>
  <vpn-service>
   <vpn-id>VPNA</vpn-id>
  </vpn-service>
 </vpn-services>
 <sites>
  <site>
   <site-id>SITE1</site-id>
   <locations>
    <location>
     <location-id>L1</location-id>
    </location>
   </locations>
   <site-network-accesses>
    <site-network-access>
     <site-network-access-id>1245HRTFGJGJ154654</site-network-access-id>
     <vpn-attachment>
      <vpn-id>VPNA</vpn-id>
      <site-role>spoke-role</site-role>
     </vpn-attachment>
     <ip-connection>
      <ipv4>
       <address-allocation-type>provider-dhcp</address-allocation-type>
      </ipv4>
      <ipv6>
       <address-allocation-type>provider-dhcp</address-allocation-type>
      </ipv6>
     </ip-connection>
     <security>
      <encryption>
       <layer>layer3</layer>
      </encryption>
     </security>
     <location-reference>L1</location-reference>
     <service>
      <svc-input-bandwidth>100000000</svc-input-bandwidth>
      <svc-output-bandwidth>100000000</svc-output-bandwidth>
      <svc-mtu>1514</svc-mtu>
      <qos>
       <qos-profile>
        <profile>PLATINUM</profile>
       </qos-profile>
      </qos>
     </service>
    </site-network-access>
    <site-network-access>
     <site-network-access-id>555555AAAA2344</site-network-access-id>
     <vpn-attachment>
      <vpn-id>VPNA</vpn-id>
      <site-role>spoke-role</site-role>
     </vpn-attachment>
     <ip-connection>
      <ipv4>
       <address-allocation-type>provider-dhcp</address-allocation-type>
      </ipv4>
      <ipv6>
       <address-allocation-type>provider-dhcp</address-allocation-type>
      </ipv6>
     </ip-connection>
     <security>
      <encryption>
       <layer>layer3</layer>
      </encryption>
     </security>
     <location-reference>L1</location-reference>
     <service>
      <svc-input-bandwidth>2000000</svc-input-bandwidth>
      <svc-output-bandwidth>2000000</svc-output-bandwidth>
      <svc-mtu>1514</svc-mtu>
      <qos>
       <qos-profile>
        <profile>GOLD</profile>
       </qos-profile>
      </qos>
     </service>
    </site-network-access>
   </site-network-accesses>
  </site>
 </sites>
</l3vpn-svc>

   The following XML snippet shows an example of a service
   configuration that uses a custom QoS profile:

 <?xml version="1.0"?>
 <l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
  <vpn-profiles>
   <valid-provider-identifiers>
    <qos-profile-identifier>
     <id>GOLD</id>
    </qos-profile-identifier>
    <qos-profile-identifier>
     <id>PLATINUM</id>
    </qos-profile-identifier>
   </valid-provider-identifiers>
  </vpn-profiles>
  <vpn-services>
   <vpn-service>
    <vpn-id>VPNA</vpn-id>
   </vpn-service>
  </vpn-services>
  <sites>
   <site>
    <site-id>SITE1</site-id>
    <locations>
     <location>
      <location-id>L1</location-id>
     </location>
    </locations>
    <site-network-accesses>
     <site-network-access>
       <site-network-access-id>Site1</site-network-access-id>
       <vpn-attachment>
        <vpn-id>VPNA</vpn-id>
        <site-role>spoke-role</site-role>
       </vpn-attachment>
       <ip-connection>
        <ipv4>
         <address-allocation-type>provider-dhcp</address-allocation-type>
        </ipv4>
        <ipv6>
         <address-allocation-type>provider-dhcp</address-allocation-type>
        </ipv6>
       </ip-connection>
       <security>
        <encryption>
         <layer>layer3</layer>
        </encryption>
       </security>
       <location-reference>L1</location-reference>
       <service>
        <svc-input-bandwidth>100000000</svc-input-bandwidth>
        <svc-output-bandwidth>100000000</svc-output-bandwidth>
        <svc-mtu>1514</svc-mtu>
       <qos>
        <qos-profile>
         <classes>
          <class>
           <class-id>REAL_TIME</class-id>
           <direction>both</direction>
           <rate-limit>10</rate-limit>
           <latency>
            <use-lowest-latency/>
           </latency>
           <bandwidth>
            <guaranteed-bw-percent>80</guaranteed-bw-percent>
           </bandwidth>
          </class>
          <class>
           <class-id>DATA1</class-id>
           <latency>
            <latency-boundary>70</latency-boundary>
           </latency>
           <bandwidth>
            <guaranteed-bw-percent>80</guaranteed-bw-percent>
           </bandwidth>
          </class>
          <class>
           <class-id>DATA2</class-id>
           <latency>
            <latency-boundary>200</latency-boundary>
           </latency>
           <bandwidth>
            <guaranteed-bw-percent>5</guaranteed-bw-percent>
            <end-to-end/>
           </bandwidth>
          </class>
         </classes>
        </qos-profile>
       </qos>
      </service>
     </site-network-access>
    </site-network-accesses>
   </site>
  </sites>
 </l3vpn-svc>

   The custom QoS profile for Site1 defines a REAL_TIME class with a
   latency constraint expressed as the lowest possible latency.  It
   also defines two data classes, DATA1 and DATA2, each of which
   expresses a latency boundary constraint as well as a bandwidth
   reservation.  The REAL_TIME class is rate-limited to 10% of the
   service bandwidth (10% of 100 Mbps = 10 Mbps), so when congestion
   occurs, REAL_TIME traffic can go up to 10 Mbps (assume that only
   5 Mbps are actually consumed).  DATA1 and DATA2 will share the
   remaining bandwidth (95 Mbps) according to their percentages: the
   DATA1 class will be served with at least 76 Mbps of bandwidth
   (80% of 95 Mbps), while the DATA2 class will be served with at
   least 4.75 Mbps (5% of 95 Mbps).  The latency boundary information
   of a data class may help the SP define specific buffer tuning or
   specific routing within the network.  The maximum percentage to be
   used is not limited by this model but MUST be limited by the
   management system according to the policies authorized by the SP.
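The bandwidth arithmetic in the paragraph above can be checked with a short calculation; the figures below are the ones used in the example, and the variable names are illustrative only:

```python
# Sanity check of the bandwidth-sharing arithmetic in the custom QoS
# example: a 100 Mbps service with a rate-limited REAL_TIME class and
# two data classes sharing the remainder by guaranteed-bw-percent.

service_bw_mbps = 100.0                       # service bandwidth (100 Mbps)

# REAL_TIME is rate-limited to 10% of the service bandwidth.
real_time_cap = 10 * service_bw_mbps / 100    # 10 Mbps

# Assume, as the text does, that REAL_TIME only consumes 5 Mbps.
real_time_used = 5.0
remaining = service_bw_mbps - real_time_used  # 95 Mbps left for data classes

# Each data class is guaranteed its percentage of the remaining bandwidth.
data1_min = 80 * remaining / 100              # DATA1: 80% of 95 = 76 Mbps
data2_min = 5 * remaining / 100               # DATA2:  5% of 95 = 4.75 Mbps
```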

6.12.4. Multicast

The "multicast" container defines the type of site in the customer multicast service topology: source, receiver, or both. These parameters will help the management system optimize the multicast service. Users can also define the type of multicast relationship with the customer: router (requires a multicast routing protocol such as PIM), host (uses IGMP or MLD), or both. An address family (IPv4, IPv6, or both) can also be defined.
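As an illustration, a receiver-only site using host signaling over IPv4 could be described as in the sketch below.  This fragment is a sketch only: it assumes the "multicast" feature is supported by the server, and the node names ("multicast-site-type", "multicast-address-family", "protocol-type") should be checked against the "multicast" container in the ietf-l3vpn-svc module:

```xml
 <sites>
  <site>
   <site-id>SITE1</site-id>
   <multicast>
    <multicast-site-type>receiver-only</multicast-site-type>
    <multicast-address-family>
     <ipv4>true</ipv4>
     <ipv6>false</ipv6>
    </multicast-address-family>
    <protocol-type>host</protocol-type>
   </multicast>
  </site>
 </sites>
```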

