Protocol designers need to provide a full set of security services, which can be used where appropriate. The techniques discussed here include encryption, authentication, filtering, firewalls, access control, isolation, aggregation, and others. Often, security is achieved by careful protocol design rather than by adding a security method. For example, one method of mitigating DoS attacks is to make sure that innocent parties cannot be used to amplify the attack. Security works better when it is "designed in" rather than "added on". Nothing is ever 100% secure. Defense therefore involves protecting against those attacks that are most likely to occur or that have the most severe consequences if successful. For those attacks that are protected against, absolute protection is seldom achievable; more often it is sufficient to make the cost of a successful attack greater than what the adversary is willing or able to expend. Successfully defending against an attack does not necessarily mean the attack must be prevented from happening or from reaching its target. In many cases, the network can instead be designed to withstand the attack. For example, the introduction of inauthentic packets could be defended against by preventing their introduction in the first place, or by making it possible to identify and eliminate them before delivery to the MPLS/GMPLS user's system. The latter is frequently a much easier task.
Authentication techniques may, however, be useful against resource exhaustion attacks based on the exhaustion of state information (e.g., TCP SYN attacks). The MPLS data plane, as presently defined, is not amenable to source authentication, as there are no source identifiers in the MPLS packet to authenticate. The MPLS label is only locally meaningful. It is normally assigned by the downstream node, although it may be assigned by the upstream node for multicast support. When the MPLS payload carries identifiers that may be authenticated (e.g., IP packets), authentication may be carried out at the client level, but this does not help the MPLS SP, as these client identifiers belong to an external, untrusted network. As noted in Section 5.1.1, authentication should be bidirectional.
key systems. Another approach is to use a hierarchical Certification Authority system to provide digital certificates. This section describes or provides references to the specific cryptographic approaches for authenticating identity. These approaches provide secure mechanisms for most of the authentication scenarios required in securing an MPLS/GMPLS network (see also Section 5.1). Cryptographic methods add complexity to a service and thus, for a few reasons, may not be the most practical solution in every case. Cryptography adds a computational burden to devices, which may reduce the number of user connections that can be handled on a device or otherwise reduce the capacity of the device, potentially driving up the provider's costs. Configuring encryption services on devices typically adds to the complexity of their configuration and adds labor cost. Some key management system is usually needed. Packet sizes are typically increased when the packets are encrypted or have integrity checks or replay counters added, increasing the network traffic load and adding to the likelihood of packet fragmentation with its increased overhead. (This packet length increase can often be mitigated to some extent by data compression techniques, but at the expense of additional computational burden.) Finally, some providers may employ enough other defensive techniques, such as physical isolation or filtering and firewall techniques, that they perceive no additional benefit from encryption. Users may wish to provide confidentiality end to end. Generally, encryption for confidentiality must be accompanied by cryptographic integrity checks to prevent certain active attacks against the encrypted communications. On today's processors, encryption and integrity checks run extremely quickly, but key management may be more demanding in terms of both computational and administrative overhead.
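The packet-size increase mentioned above can be estimated. The following sketch is illustrative only: the outer-header, IV, and ICV sizes are assumptions typical of ESP tunnel mode with a 16-byte block cipher, and the function name is the editor's own, not something defined in this document.

```python
import math

def esp_overhead(payload_len: int, block: int = 16, iv: int = 16, icv: int = 16) -> int:
    """Estimate the bytes added when an IP payload is wrapped in ESP
    tunnel mode. Assumed sizes (illustrative, not normative): new outer
    IPv4 header (20), ESP SPI + sequence number (8), IV, CBC-style
    padding to the cipher block size including the 2-byte
    pad-length/next-header trailer, and the integrity check value."""
    outer_ip, esp_header, trailer = 20, 8, 2
    padded = math.ceil((payload_len + trailer) / block) * block
    pad = padded - payload_len  # includes the 2 fixed trailer bytes
    return outer_ip + esp_header + iv + pad + icv
```

For a 1400-byte payload under these assumptions, the overhead is 68 bytes, which is enough to push a near-MTU-sized packet into fragmentation.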
The trust model among the MPLS/GMPLS user, the MPLS/GMPLS provider, and other parts of the network is a major element in determining the applicability of cryptographic protection for any specific MPLS/GMPLS implementation. In particular, it determines where cryptographic protection should be applied: - If the data path between the user's site and the provider's PE is not trusted, then it may be used on the PE-CE link. - If some part of the backbone network is not trusted, particularly in implementations where traffic may travel across the Internet or multiple providers' networks, then the PE-PE traffic may be cryptographically protected. One also should consider cases where L1 technology may be vulnerable to eavesdropping. - If the user does not trust any zone outside of its premises, it may require end-to-end or CE-CE cryptographic protection. This fits within the scope of this MPLS/GMPLS security framework when the CE is provisioned by the MPLS/GMPLS provider. - If the user requires remote access to its site from a system at a location that is not a customer location (for example, access by a traveler), there may be a requirement for cryptographically protecting the traffic between that system and an access point or a customer's site. If the MPLS/GMPLS provider supplies the access point, then the customer must cooperate with the provider to handle the access control services for the remote users. These access control services are usually protected cryptographically, as well. Access control usually starts with authentication of the entity. If cryptographic services are part of the scenario, then it is important to bind the authentication to the key management. Otherwise, the protocol is vulnerable to being hijacked between the authentication and key management. 
Although CE-CE cryptographic protection can provide integrity and confidentiality against third parties, if the MPLS/GMPLS provider has complete management control over the CE (encryption) devices, then it may be possible for the provider to gain access to the user's traffic or internal network. Encryption devices could potentially be reconfigured to use null encryption, bypass cryptographic processing altogether, reveal internal configuration, or provide some means of sniffing or diverting unencrypted traffic. Thus an implementation using CE-CE encryption needs to consider the trust relationship between the MPLS/GMPLS user and provider. MPLS/GMPLS users and providers may wish to negotiate a service level agreement (SLA) for CE-CE encryption that provides an acceptable demarcation of
responsibilities for management of cryptographic protection on the CE devices. The demarcation may also be affected by the capabilities of the CE devices. For example, the CE might support some partitioning of management, a configuration lock-down ability, or a shared capability to verify the configuration. In general, if the managed CE-CE model is used, the MPLS/GMPLS user needs to have a fairly high level of trust that the MPLS/GMPLS provider will properly provision and manage the CE devices. IPsec [RFC4301] [RFC4302] [RFC4835] [RFC4306] [RFC4309] [RFC2411] [IPSECME-ROADMAP] is the security protocol of choice for protection at the IP layer. IPsec provides robust security for IP traffic between pairs of devices. Non-IP traffic, such as IS-IS routing, must be converted to IP (e.g., by encapsulation) in order to use IPsec. When MPLS is encapsulating IP traffic, IPsec covers the encryption of the IP client layer; for non-IP client traffic, see Section 5.2.4 (MPLS PWs). In the MPLS/GMPLS model, IPsec can be employed to protect IP traffic between PEs, between a PE and a CE, or from CE to CE. CE-to-CE IPsec may be employed in either a provider-provisioned or a user-provisioned model. IPsec protection of data performed within the user's site is outside the scope of this document, because such traffic is simply handled as user data by the MPLS/GMPLS core. However, if the SP performs compression, pre-encryption will have a major effect on that operation. IPsec does not itself specify cryptographic algorithms. It can use a variety of integrity or confidentiality algorithms (or even combined integrity and confidentiality algorithms) with various key lengths, such as AES encryption or AES-based message integrity checks. There are trade-offs between key length, computational burden, and the level of security of the encryption. A full discussion of these trade-offs is beyond the scope of this document.
In practice, any currently recommended IPsec protection offers enough security to substantially reduce the likelihood of its being directly targeted by an attacker; other, weaker links in the chain of security are likely to be attacked first. MPLS/GMPLS users may wish to use a Service Level Agreement (SLA) specifying the SP's responsibility for ensuring data integrity and confidentiality, rather than analyzing the specific encryption techniques used in the MPLS/GMPLS service. Encryption algorithms generally come with two parameters: a mode, such as Cipher Block Chaining, and a key length, such as the 192-bit key of AES-192. (This should not be confused with two other senses in which the word "mode" is used: IPsec itself can be used in Tunnel Mode or Transport Mode,
and IKE [version 1] uses Main Mode, Aggressive Mode, or Quick Mode). It should be stressed that IPsec encryption without an integrity check is not secure and should not be used. For many of the MPLS/GMPLS provider's network control messages and some user requirements, cryptographic authentication of messages without encryption of the contents of the message may provide appropriate security. Using IPsec, authentication of messages is provided by the Authentication Header (AH) or through the use of the Encapsulating Security Payload (ESP) with NULL encryption. Where control messages require integrity but do not use IPsec, other cryptographic authentication methods are often available. Message authentication methods currently considered to be secure are based on hashed message authentication codes (HMAC) [RFC2104] implemented with a secure hash algorithm such as Secure Hash Algorithm 1 (SHA-1) [RFC3174]. Although no practical attacks against HMAC-SHA-1 are currently known, collisions in SHA-1 itself are feasible. Thus, it is important that mechanisms be designed to be flexible about the choice of hash functions and message integrity checks. Also, many of these mechanisms do not include a convenient way to manage and update keys. A mechanism to provide a combination of confidentiality, data-origin authentication, and connectionless integrity is the use of AES in Galois/Counter Mode (GCM) [RFC4106]. Either of the cryptographic suites defined in [RFC4308] or [RFC4869] provides more than adequate security.
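The HMAC construction and the algorithm agility recommended above can be sketched as follows, using only Python's standard library. SHA-256 appears here only as an example of a stronger default; the function names are illustrative, not from any cited specification.

```python
import hashlib
import hmac

def tag(key: bytes, message: bytes, algo=hashlib.sha256) -> bytes:
    """Compute an HMAC tag over a control message.

    The hash function is a parameter so that it can be replaced if the
    current choice weakens, per the flexibility advice above."""
    return hmac.new(key, message, algo).digest()

def verify(key: bytes, message: bytes, received: bytes, algo=hashlib.sha256) -> bool:
    # compare_digest performs a constant-time comparison, avoiding
    # timing side channels when checking the received tag.
    return hmac.compare_digest(tag(key, message, algo), received)
```

A receiver sharing the key accepts a message only when verify() succeeds; a single flipped bit in the message or tag causes rejection.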
- Secure Shell (SSH) provides protection for TELNET [STD8] or terminal-like connections to allow device configuration. - SNMPv3 [STD62] provides encrypted and authenticated protection for SNMP-managed devices. - Transport Layer Security (TLS) [RFC5246] and the closely related Secure Sockets Layer (SSL) are widely used for securing HTTP-based communication, and thus can provide support for most XML- and SOAP-based device management approaches. - Since 2004, there has been extensive work proceeding in several organizations (OASIS, W3C, WS-I, and others) on securing device management traffic within a "Web Services" framework, using a wide variety of security models, and providing support for multiple security token formats, multiple trust domains, multiple signature formats, and multiple encryption technologies. - IPsec provides security services, including integrity and confidentiality, at the network layer. With regard to device management, its current use is primarily focused on in-band management of user-managed IPsec gateway devices. - There is recent work in the ISMS WG (Integrated Security Model for SNMP Working Group) to define how to use SSH to secure SNMP, due to the limited deployment of SNMPv3, and the possibility of using Kerberos, particularly for interfaces like TELNET, where client code exists. Pseudowires (PWs) are described in the PWE3 architecture [RFC3985].
PW tunnels may be set up using the PWE control protocol based on LDP [RFC4447], and thus security considerations for LDP will most likely be applicable to the PWE3 control protocol as well. PW user packets contain at least one MPLS label (the PW label) and may contain one or more MPLS tunnel labels. After the label stack, there is a four-byte control word (which is optional for some PW types), followed by the native service payload. It must be stressed that encapsulation of MPLS PW packets in IP for the purpose of enabling use of IPsec mechanisms is not a valid option. The following is a non-exhaustive list of PW-specific threats: - Unauthorized setup of a PW (e.g., to gain access to a customer network) - Unauthorized teardown of a PW (thus causing denial of service) - Malicious reroute of a PW - Unauthorized observation of PW packets - Traffic analysis of PW connectivity - Unauthorized insertion of PW packets - Unauthorized modification of PW packets - Unauthorized deletion of PW packets - Unauthorized replay of PW packets - Denial of service or significant impact on PW service quality These threats are not mutually exclusive; for example, rerouting can be used for snooping or insertion/deletion/replay, etc. Multisegment PWs introduce additional weaknesses at their stitching points. The PW user plane suffers from the following inherent security weaknesses: - Since the PW label is the only identifier in the packet, there is no authenticatable source address. - Since guessing a valid PW label is not difficult, it is relatively easy to introduce seemingly valid foreign packets. - Since the PW packet is not self-describing, minor modification of control-plane packets renders the data-plane traffic useless.
- The control-word sequence number processing algorithm is susceptible to a DoS attack. The PWE control protocol introduces its own weaknesses: - No (secure) peer autodiscovery technique has been standardized. - PE authentication is not mandated, so an intruder can potentially impersonate a PE; after impersonating a PE, unauthorized PWs may be set up, consuming resources and perhaps allowing access to user networks. - Alternatively, desired PWs may be torn down, giving rise to denial of service. The following characteristics of PWs can be considered security strengths: - The most obvious attacks require compromising edge or core routers (although not necessarily those along the PW path). - Adequate protection of the control-plane messaging is sufficient to rule out many types of attacks. - PEs are usually configured to reject MPLS packets from outside the service provider network, thus ruling out insertion of PW packets from the outside (since IP packets cannot masquerade as PW packets).
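The PW packet layout described above (a label stack whose bottom entry is the PW label, followed by an optional control word and the payload) can be sketched with a small parser. Note how the only demultiplexing information is the 20-bit label itself: none of these fields identifies, let alone authenticates, a source. The field layout follows the standard 4-byte MPLS label stack entry; the function name is illustrative.

```python
import struct

def parse_label_stack(packet: bytes):
    """Parse 4-byte MPLS label stack entries from the front of a packet.

    Each entry packs: label (20 bits), TC/EXP (3 bits), bottom-of-stack
    S flag (1 bit), and TTL (8 bits). Returns (entries, payload_offset);
    for a PW packet, the last entry (S = 1) is the PW label."""
    entries, offset = [], 0
    while offset + 4 <= len(packet):
        (word,) = struct.unpack_from("!I", packet, offset)
        entry = {
            "label": word >> 12,
            "tc": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1,
            "ttl": word & 0xFF,
        }
        entries.append(entry)
        offset += 4
        if entry["s"]:  # bottom of stack reached
            break
    return entries, offset
```

Whatever follows the returned offset (control word, then payload) is accepted purely on the strength of a guessable label value.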
Figure 3 depicts a simplified topology showing the Customer Edge (CE) devices, the Provider Edge (PE) devices, and a variable number (three are shown) of Provider core (P) devices, which might be present along the path between two sites in a single VPN operated by a single service provider (SP). Site_1---CE---PE---P---P---P---PE---CE---Site_2 Figure 3: Simplified Topology Traversing through MPLS/GMPLS Core Within this simplified topology, and assuming that the P devices are not involved with cryptographic protection, four basic, feasible configurations exist for protecting connections among the devices: 1) Site-to-site (CE-to-CE) - Apply confidentiality or integrity services between the two CE devices, so that traffic will be protected throughout the SP's network. 2) Provider edge-to-edge (PE-to-PE) - Apply confidentiality or integrity services between the two PE devices. Unprotected traffic is received at one PE from the customer's CE, then it is protected for transmission through the SP's network to the other PE, and finally it is decrypted or checked for integrity and sent to the other CE. 3) Access link (CE-to-PE) - Apply confidentiality or integrity services between the CE and PE on each side or on only one side. 4) Configurations 2 and 3 above can also be combined, with confidentiality or integrity running from CE to PE, then PE to PE, and then PE to CE. Among the four feasible configurations, key tradeoffs in considering encryption include: - Vulnerability to link eavesdropping or tampering - assuming an attacker can observe or modify data in transit on the links, would it be protected by encryption? - Vulnerability to device compromise - assuming an attacker can get access to a device (or freely alter its configuration), would the data be protected? 
- Complexity of device configuration and management - given the number of sites per VPN customer as Nce and the number of PEs participating in a given VPN as Npe, how many device configurations need to be created or maintained, and how do those configurations scale?
- Processing load on devices - how many cryptographic operations must be performed given N packets? This raises considerations of device capacity and perhaps end-to-end delay. - Ability of the SP to provide enhanced services (QoS, firewall, intrusion detection, etc.) - Can the SP inspect the data to provide these services? These tradeoffs are discussed for each configuration, below: 1) Site-to-site (CE-to-CE) Link eavesdropping or tampering - protected on all links. Device compromise - vulnerable to CE compromise. Complexity - single administration, responsible for one device per site (Nce devices), but overall configuration per VPN scales as Nce**2. The complexity may, however, be reduced: 1) In practice, as Nce grows, the VPN topology typically falls short of a full mesh; 2) If the CEs run an automated key management protocol, they should be able to set up and tear down secured VPNs without any intervention. Processing load - on each of the two CEs, each packet is cryptographically processed (2P), though the protection may be "integrity check only" or "integrity check plus encryption." Enhanced services - severely limited; typically only Diffserv markings are visible to the SP, allowing some QoS services. The CEs could also use the IPv6 Flow Label to identify traffic classes. 2) Provider Edge-to-Edge (PE-to-PE) Link eavesdropping or tampering - vulnerable on CE-PE links; protected on SP's network links. Device compromise - vulnerable to CE or PE compromise. Complexity - single administration, Npe devices to configure. (Multiple sites may share a PE device, so Npe is typically much smaller than Nce.) Scalability of the overall configuration depends on the PPVPN type: if the cryptographic protection is separate per VPN context, it scales as Npe**2 per customer VPN. If it is per-PE, it scales as Npe**2 for all customer VPNs combined.
Processing load - on each of the two PEs, each packet is cryptographically processed (2P). Enhanced services - full; SP can apply any enhancements based on detailed view of traffic. 3) Access Link (CE-to-PE) Link eavesdropping or tampering - protected on CE-PE link; vulnerable on SP's network links. Device compromise - vulnerable to CE or PE compromise. Complexity - two administrations (customer and SP) with device configuration on each side (Nce + Npe devices to configure), but because there is no mesh, the overall configuration scales as Nce. Processing load - on each of the two CEs, each packet is cryptographically processed, plus on each of the two PEs, each packet is cryptographically processed (4P). Enhanced services - full; SP can apply any enhancements based on a detailed view of traffic. 4) Combined Access link and PE-to-PE (essentially hop-by-hop). Link eavesdropping or tampering - protected on all links. Device compromise - vulnerable to CE or PE compromise. Complexity - two administrations (customer and SP) with device configuration on each side (Nce + Npe devices to configure). Scalability of the overall configuration depends on the PPVPN type: If the cryptographic processing is separate per VPN context, it scales as Npe**2 per customer VPN. If it is per- PE, it scales as Npe**2 for all customer VPNs combined. Processing load - on each of the two CEs, each packet is cryptographically processed, plus on each of the two PEs, each packet is cryptographically processed twice (6P). Enhanced services - full; SP can apply any enhancements based on a detailed view of traffic.
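The complexity and processing-load figures above can be tabulated. In this sketch, the function name and the use of n*(n-1)/2 full-mesh pair counts are simplifications for illustration; real deployments are often sparser, as noted in the text. For each configuration it returns a rough count of protected pairs to configure and the per-packet cryptographic operations (2P/4P/6P).

```python
def compare_options(nce: int, npe: int):
    """Rough comparison of the four protection configurations.

    Returns {option: (protected pairs to configure,
                      crypto operations per packet)}."""
    def mesh(n):
        # Pairwise associations in a full mesh of n devices.
        return n * (n - 1) // 2

    return {
        "1 CE-to-CE": (mesh(nce), 2),        # CE mesh, 2P
        "2 PE-to-PE": (mesh(npe), 2),        # smaller PE mesh, 2P
        "3 CE-to-PE": (nce, 4),              # one per access link, 4P
        "4 combined": (nce + mesh(npe), 6),  # access links + PE mesh, 6P
    }
```

With 10 sites on 4 PEs, option 1 needs 45 pairwise configurations against 16 for option 4, illustrating why the CE mesh dominates as Nce grows.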
Given the tradeoffs discussed above, a few conclusions can be drawn: - Configurations 2 and 3 are subsets of 4 that may be appropriate alternatives to 4 under certain threat models; the remainder of these conclusions compare 1 (CE-to-CE) versus 4 (combined access links and PE-to-PE). - If protection from link eavesdropping or tampering is all that is important, then configurations 1 and 4 are equivalent. - If protection from device compromise is most important and the threat is to the CE devices, both cases are equivalent; if the threat is to the PE devices, configuration 1 is better. - If reducing complexity is most important and the size of the network is small, configuration 1 is better. Otherwise, configuration 4 is better because, rather than a mesh of CE devices, it requires a smaller mesh of PE devices. Also, under some PPVPN approaches, the scaling of 4 is further improved by sharing the same PE-PE mesh across all VPN contexts. The scaling advantage of 4 may be increased or decreased in any given situation if the CE devices are simpler to configure than the PE devices, or vice versa. - If the overall processing load is a key factor, then 1 is better, unless the PEs come with a hardware encryption accelerator and the CEs do not. - If the availability of enhanced services support from the SP is most important, then 4 is best. - If users are concerned with having their VPNs misconnected with other users' VPNs, then encryption with 1 can provide protection. As a quick overall conclusion, CE-to-CE protection is better against device compromise, but this comes at the cost of enhanced services and of operational complexity due to the O(n**2) scaling of the larger mesh. This analysis of site-to-site vs. hop-by-hop tradeoffs does not explicitly include cases of multiple providers cooperating to provide a PPVPN service, public Internet VPN connectivity, or remote access VPN service, but many of the tradeoffs are similar.
In addition to the simplified models, the following should also be considered: - There may be reasons to protect a specific P-to-P or PE-to-P link. - There may be reasons to apply multiple layers of encryption over certain segments. (For example, one may use an encrypted wireless link under an IPsec VPN to access an SSL-secured web site and download encrypted email attachments: four layers.) - It may be appropriate that, for example, cryptographic integrity checks are applied end to end, and confidentiality is applied over a shorter span. - Different cryptographic protection may be required for control protocols and data traffic. - Attention needs to be given to how auxiliary traffic is protected, e.g., the ICMPv6 packets that flow back during PMTU discovery, among other examples. (See Section 5.4 of this document.) In this document, we distinguish between filtering and firewalls based primarily on the direction of traffic flow. We define filtering as being applicable to unidirectional traffic, while a firewall can analyze and control both sides of a conversation. The definition has two significant corollaries: - Routing or traffic flow symmetry: A firewall typically requires routing symmetry, which is usually enforced by locating the firewall where the network topology ensures that both sides of a conversation will pass through it. A filter can operate on traffic flowing in one direction, without considering traffic in the reverse direction. Note that concentrating both directions of a conversation at one firewall can create a single point of failure.
- Statefulness: Because it receives both sides of a conversation, a firewall may be able to interpret a significant amount of information concerning the state of that conversation and use this information to control access. A filter can maintain some limited state information on a unidirectional flow of packets, but cannot determine the state of the bidirectional conversation as precisely as a firewall. For a general description of filtering and rate limiting for IP networks, please also see [OPSEC-FILTER]. (See also [RFC4301].) In discussing filters, it is useful to separate the filter characteristics that may be used to determine whether a packet matches a filter from the packet actions applied to those packets matching a particular filter. o Filter Characteristics Filter characteristics or rules are used to determine whether a particular packet or set of packets matches a particular filter. In many cases, filter characteristics may be stateless. A stateless filter determines whether a particular packet matches a filter based solely on the filter definition, normal forwarding information (such as the next hop for a packet), the interface on which a packet arrived, and the contents of that individual packet. Typically, stateless filters may consider the incoming and outgoing logical or physical interface, information in the IP header, and information in higher-layer headers such as the TCP or UDP header. Information in the IP header to be considered may include, for example, source and destination IP addresses; the Protocol field, Fragment Offset, and TOS field in IPv4; or the Next Header, Extension Headers, Flow Label, etc. in IPv6. Filters may also consider fields in the TCP or UDP header such as the port numbers and the SYN field in the TCP header, as well as the ICMP and ICMPv6 type.
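A stateless filter of the kind described above can be sketched as a simple match function. The rule and packet field names here are illustrative and not drawn from any particular router configuration language.

```python
import ipaddress

def matches(rule: dict, packet: dict) -> bool:
    """Stateless filter match: every field present in the rule must
    match the packet; rule fields that are absent act as wildcards."""
    if "src" in rule and ipaddress.ip_address(packet["src"]) not in ipaddress.ip_network(rule["src"]):
        return False
    if "dst" in rule and ipaddress.ip_address(packet["dst"]) not in ipaddress.ip_network(rule["dst"]):
        return False
    # Exact-match fields: protocol, destination port, incoming interface.
    return all(rule[f] == packet.get(f) for f in ("proto", "dport", "in_iface") if f in rule)
```

The decision depends only on the rule and the single packet in hand, which is exactly what makes the filter stateless.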
Stateful filtering maintains packet-specific state information to aid in determining whether a filter rule has been met. For example, a device might apply stateless filtering to the first fragment of a fragmented IPv4 packet. If the filter matches, then the data unit ID may be remembered, and other fragments of the same packet may then be considered to match the same filter. Stateful filtering is more commonly done in firewalls, although firewall technology may be added to routers. In IPv6, the data unit ID is carried in the Identification field of the Fragment extension header. o Actions Based on Filter Results If a packet, or a series of packets, matches a specific filter, then a variety of actions may be taken based on that match. Examples of such actions include: - Discard In many cases, filters are set to catch certain undesirable packets. Examples may include packets with forged or invalid source addresses, packets that are part of a DoS or Distributed DoS (DDoS) attack, or packets trying to access disallowed resources (such as network management packets from an unauthorized source). Where such filters are activated, it is common to silently discard the packet or set of packets matching the filter. The discarded packets may of course also be counted or logged. - Set CoS A filter may be used to set the class of service associated with the packet. - Count packets or bytes - Rate Limit In some cases, the set of packets matching a particular filter may be limited to a specified bandwidth. In this case, packets or bytes would be counted and would be forwarded normally up to the specified limit. Excess packets may be discarded or may be marked (for example, by setting a "discard eligible" bit in the IPv4 ToS field, or by changing the EXP value to identify the traffic as being out of contract).
- Forward and Copy It is useful in some cases to forward some set of packets normally, but also to send a copy to a specified other address or interface. For example, this may be used to implement a lawful intercept capability or to feed selected packets to an Intrusion Detection System. o Other Packet Filter Issues Filtering performance may vary widely according to the implementation and the types and number of rules. Without acceptable performance, filtering is not useful. The precise definition of "acceptable" may vary from SP to SP and may depend upon the intended use of the filters. For example, for some uses, a filter may be turned on all the time to set CoS, to prevent an attack, or to mitigate the effect of a possible future attack. In this case, it is likely that the SP will want the filter to have minimal or no impact on performance. In other cases, a filter may be turned on only in response to a major attack (such as a major DDoS attack). In this case, a greater performance impact may be acceptable to some service providers. A key consideration with the use of packet filters is that they can provide few options for filtering packets carrying encrypted data. Because the data itself is not accessible, only packet header information or other unencrypted fields can be used for filtering.
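The Rate Limit action described above is commonly implemented as a token bucket. The following is a minimal sketch (the class and parameter names are illustrative): packets within the configured rate are forwarded, and excess packets are reported so the caller can discard or mark them as out of contract. Time is passed in explicitly to keep the sketch deterministic.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter for a filtered packet set."""

    def __init__(self, rate_bits_per_sec: float, burst_bytes: float):
        self.rate = rate_bits_per_sec / 8.0  # refill rate in bytes/sec
        self.burst = burst_bytes
        self.tokens = burst_bytes            # start with a full bucket
        self.last = 0.0

    def allow(self, pkt_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True   # forward normally
        return False      # discard, or mark as out of contract
```

The burst parameter bounds how far traffic can momentarily exceed the configured rate, matching the "forwarded normally up to the specified limit" behavior in the text.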
MPLS/GMPLS user sites, but typically other defensive techniques will be used for this purpose. Where firewalls are employed as a service to protect user VPN sites from the Internet, different VPN users, and even different sites of a single VPN user, may have varying firewall requirements. The overall PPVPN logical and physical topology, along with the capabilities of the devices implementing the firewall services, has a significant effect on the feasibility and manageability of such varied firewall service offerings. Another consideration with the use of firewalls is that they can provide few options for handling packets carrying encrypted data. Because the data itself is not accessible, only packet header information, other unencrypted fields, or analysis of the flow of encrypted packets can be used for making decisions on accepting or rejecting encrypted traffic. Two approaches to using firewalls are to move the firewall outside of the encrypted part of the path, or to register and pre-approve the encrypted session with the firewall. Handling DoS attacks has become increasingly important. Useful guidelines include the following: 1. Perform ingress filtering everywhere. 2. Be able to filter DoS attack packets at line speed. 3. Do not allow oneself to amplify attacks. 4. Continue processing legitimate traffic. Overprovision for heavy loads. Use diverse locations, technologies, etc. (See Section 5.1.) However, additional security may be provided by controlling access to management interfaces in other ways. The Optical Internetworking Forum has done relevant work on protecting such interfaces with TLS, SSH, Kerberos, IPsec, WSS, etc. See "Security for Management Interfaces to Network Elements" [OIF-SMI-01.0] and "Addendum to the Security for Management Interfaces to Network Elements" [OIF-SMI-02.1]. See also the work in the ISMS WG (http://datatracker.ietf.org/wg/isms/charter/).
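Guideline 1 above, ingress filtering (cf. BCP 38), amounts to accepting a packet on a customer-facing interface only if its source address lies within a prefix expected on that interface, so that spoofed sources are dropped at the edge. A minimal sketch, with an illustrative function name:

```python
import ipaddress

def ingress_ok(src_ip: str, expected_prefixes) -> bool:
    """Return True only if the source address falls within one of the
    prefixes assigned to the arrival interface; anything else is a
    candidate spoofed packet and should be dropped at the edge."""
    src = ipaddress.ip_address(src_ip)
    return any(src in ipaddress.ip_network(prefix) for prefix in expected_prefixes)
```

Applied everywhere at the edge, this check also supports guideline 3, since devices that cannot receive spoofed traffic are harder to use as amplifiers.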
Management interfaces, especially console ports on MPLS/GMPLS devices, may be configured so they are only accessible out-of-band, through a system that is physically or logically separated from the rest of the MPLS/GMPLS infrastructure. Where management interfaces are accessible in-band within the MPLS/GMPLS domain, filtering or firewalling techniques can be used to restrict unauthorized in-band traffic from having access to management interfaces. Depending on device capabilities, these filtering or firewalling techniques can be configured either on other devices through which the traffic might pass, or on the individual MPLS/GMPLS devices themselves. (See also [RFC4864].) In a GMPLS network, it is possible to operate the control plane using physically separate resources from those used for the data plane. This means that the data-plane resources can be physically protected and isolated from other equipment to protect users' data, while the control and management traffic uses network resources that can be accessed by operators to configure the network. Conversely, the separation of control and data traffic may lead the operator to consider the network secure because the data-plane resources are physically secure. However, this is not the case if the control plane can be attacked through a shared or open network, and control-plane protection techniques must still be applied.
nonetheless there will still be multiple MPLS/GMPLS users sharing the same network resources. In some cases, MPLS/GMPLS services will share network resources with Internet services or other services. It is therefore important for MPLS/GMPLS services to provide protection between the resources used by different parties. Thus, a well-behaved MPLS/GMPLS user should be protected from possible misbehavior by other users. This requires several security measures to be implemented.

Resource limits can be placed on a per-service and per-user basis. Possibilities include, for example, using a virtual router or logical router to define hardware or software resource limits per service or per individual user; using rate limiting per Virtual Routing and Forwarding (VRF) instance or per Internet connection to provide bandwidth protection; or using resource reservation for control-plane traffic. In addition to bandwidth protection, separate resource allocation can be used to limit the impact of security attacks to the directly affected service(s) or customer(s). Strict, separate, and clearly defined engineering rules and provisioning procedures can reduce the risk of network-wide impact from a control-plane attack, DoS attack, or misconfiguration.

In general, the use of aggregated infrastructure allows the service provider to benefit from stochastic multiplexing of multiple bursty flows, and may in some cases also thwart traffic pattern analysis by combining the data from multiple users. However, service providers must minimize the security risks introduced by any individual service or user.
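The per-user rate limiting mentioned above is commonly implemented with a token bucket per service or per VRF, so that one user's burst cannot exhaust capacity shared with other users. A minimal sketch, with illustrative rates and names:

```python
import time

class TokenBucket:
    """Per-user rate limiter: each service/user gets its own bucket,
    so one user's traffic burst cannot starve the others."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst      # tokens/sec, bucket depth
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

buckets: dict = {}  # one bucket per VRF/user, created on first use

def admit(user: str, cost: float = 1.0) -> bool:
    # Illustrative defaults; real limits come from the service agreement.
    bucket = buckets.setdefault(user, TokenBucket(rate=100.0, burst=200.0))
    return bucket.allow(cost)
```

Because each VRF or user owns a separate bucket, an attack saturating one bucket leaves the admission decisions for other users unaffected, which is exactly the isolation property the text calls for.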
much ensures that there is no unintended connectivity to some other site. Section 4 of this document.

Many of the defensive techniques described in this document and elsewhere provide significant levels of protection from a variety of threats. However, in addition to employing defensive techniques silently to protect against attacks, MPLS/GMPLS services can also add value for both providers and customers by implementing security monitoring systems that detect and report security attacks, regardless of whether the attacks are effective.

Attackers often begin by probing and analyzing defenses, so systems that can detect and properly report these early stages of attacks can provide significant benefits. Information concerning attack incidents, especially if available quickly, can be useful in defending against further attacks. It can be used to help identify attackers or their specific targets at an early stage. This knowledge about attackers and targets can be used to strengthen defenses against specific attacks or attackers, or to improve the defenses for specific targets on an as-needed basis. Information collected on attacks may also be useful in identifying and developing defenses against novel attack types.

Monitoring systems used to detect security attacks in MPLS/GMPLS typically operate by collecting information from the Provider Edge (PE), Customer Edge (CE), and/or Provider backbone (P) devices. Such systems may actively retrieve information from devices (e.g., SNMP get) or passively receive reports from devices (e.g., SNMP notifications).
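Whichever retrieval model is used, a monitoring system must also cope with bursts of reports from the devices it watches. One common approach is to batch events and emit a single consolidated report per interval rather than one message per event. A minimal sketch (the class name and interval are hypothetical, not from this document):

```python
import time

class EventBatcher:
    """Collects security events and emits them as one consolidated
    report per interval, so that a burst of detections cannot itself
    flood the monitoring channel."""
    def __init__(self, interval: float = 30.0):
        self.interval = interval
        self.pending: list = []
        self.last_flush = time.monotonic()

    def record(self, event: dict) -> list:
        """Queue an event; return a batched report once the reporting
        interval has elapsed, otherwise return an empty list."""
        self.pending.append(event)
        if time.monotonic() - self.last_flush >= self.interval:
            report, self.pending = self.pending, []
            self.last_flush = time.monotonic()
            return report
        return []
```

This is one way to realize the "multiple attack events in a single message" design discussed below; real devices typically implement equivalent throttling in their SNMP notification or syslog subsystems.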
The specific information exchanged depends on the capabilities of the devices and on the type of VPN technology. Particular care should be given to securing the communications channel between the monitoring systems and the MPLS/GMPLS devices. The CE, PE, and P devices should employ efficient methods to acquire and communicate the information needed by the security monitoring systems.

It is important that the communication method between MPLS/GMPLS devices and security monitoring systems be designed so that it will not disrupt network operations. As an example, multiple attack events may be reported through a single message, rather than allowing each attack event to trigger a separate message, which might result in a flood of messages, essentially becoming a DoS attack against the monitoring system or the network.

The mechanisms for reporting security attacks should be flexible enough to meet the needs of MPLS/GMPLS service providers, MPLS/GMPLS customers, and regulatory agencies, if applicable. The specific reports should depend on the capabilities of the devices, the security monitoring system, the type of VPN, and the service level agreements between the provider and customer.

While SNMP/syslog-type monitoring and detection mechanisms can detect some attacks (usually those resulting in flapping protocol adjacencies, CPU overload scenarios, etc.), other techniques, such as netflow-based traffic fingerprinting, are needed for more detailed detection and reporting. With netflow-based traffic fingerprinting, each packet that is forwarded within a device is examined for a set of IP packet attributes. These attributes form the identity or fingerprint of the packet and determine whether the packet is unique or similar to other packets. The flow information is extremely useful for understanding network behavior, and for detecting and reporting security attacks:

- Source address allows the understanding of who is originating the traffic.
- Destination address tells who is receiving the traffic.
- Ports characterize the application utilizing the traffic.
- Class of service indicates the priority of the traffic.
- The device interface tells how traffic is being utilized by the network device.
- Tallied packets and bytes show the amount of traffic.
- Flow timestamps allow the understanding of the life of a flow; timestamps are useful for calculating packets and bytes per second.
- Next-hop IP addresses, including BGP routing Autonomous Systems (ASes).
- Subnet masks for the source and destination addresses are used for calculating prefixes.
- TCP flags are useful for examining TCP handshakes.
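A flow-fingerprinting collector built on the attributes above can be sketched as a dictionary of per-flow counters keyed by those attributes. The field names and key layout below are illustrative and do not correspond to any particular NetFlow export version:

```python
import time
from dataclasses import dataclass, field

@dataclass
class FlowRecord:
    """Per-flow counters, netflow-style (illustrative fields)."""
    packets: int = 0
    octets: int = 0
    first_seen: float = field(default_factory=time.monotonic)
    last_seen: float = 0.0
    tcp_flags: int = 0   # ORed across the flow; e.g., a SYN-only flow
                         # pattern can indicate a SYN flood

# Flow key: (src, dst, src_port, dst_port, protocol, ToS, ifindex),
# i.e., the packet attributes listed above.
flows: dict = {}

def account(src, dst, sport, dport, proto, tos, ifindex,
            length, tcp_flags=0):
    """Tally a packet into its flow record: count packets and bytes,
    track flow lifetime via timestamps, and accumulate TCP flags."""
    key = (src, dst, sport, dport, proto, tos, ifindex)
    rec = flows.setdefault(key, FlowRecord())
    rec.packets += 1
    rec.octets += length
    rec.last_seen = time.monotonic()
    rec.tcp_flags |= tcp_flags
    return rec
```

From such records, the behaviors the text mentions fall out directly: packets and bytes per second come from the counters divided by the timestamp span, and anomalous fingerprints (e.g., many single-packet SYN-only flows toward one destination) can be flagged for the monitoring system.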