Internet Engineering Task Force (IETF) P. Lapukhov
Request for Comments: 7938 Facebook
Category: Informational A. Premji
ISSN: 2070-1721 Arista Networks
J. Mitchell, Ed.
August 2016

Use of BGP for Routing in Large-Scale Data Centers

Abstract
Some network operators build and operate data centers that support
over one hundred thousand servers. In this document, such data
centers are referred to as "large-scale" to differentiate them from
smaller infrastructures. Environments of this scale have a unique
set of network requirements with an emphasis on operational
simplicity and network stability. This document summarizes
operational experience in designing and operating large-scale data
centers using BGP as the only routing protocol. The intent is to
report on a proven and stable routing design that could be leveraged
by others in the industry.
Status of This Memo
This document is not an Internet Standards Track specification; it is
published for informational purposes.
This document is a product of the Internet Engineering Task Force
(IETF). It represents the consensus of the IETF community. It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG). Not all documents
approved by the IESG are a candidate for any level of Internet
Standard; see Section 2 of RFC 7841.
Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
http://www.rfc-editor.org/info/rfc7938.
1. Introduction
This document describes a practical routing design that can be used
in a large-scale data center (DC) design. Such data centers, also
known as "hyper-scale" or "warehouse-scale" data centers, have a
unique attribute of supporting over a hundred thousand servers. In
order to accommodate networks of this scale, operators are revisiting
networking designs and platforms to address this need.
The design presented in this document is based on operational
experience with data centers built to support large-scale distributed
software infrastructure, such as a web search engine. The primary
requirements in such an environment are operational simplicity and
network stability so that a small group of people can effectively
support a significantly sized network.
Experimentation and extensive testing have shown that External BGP
(EBGP) [RFC4271] is well suited as a stand-alone routing protocol for
these types of data center applications. This is in contrast with
more traditional DC designs, which may use simple tree topologies and
rely on extending Layer 2 (L2) domains across multiple network
devices. This document elaborates on the requirements that led to
this design choice and presents details of the EBGP routing design as
well as exploring ideas for further enhancements.
This document first presents an overview of network design
requirements and considerations for large-scale data centers. Then,
traditional hierarchical data center network topologies are
contrasted with Clos networks [CLOS1953] that are horizontally scaled
out. This is followed by arguments for selecting EBGP with a Clos
topology as the most appropriate routing protocol to meet the
requirements and the proposed design is described in detail.
Finally, this document reviews some additional considerations and
design options. A thorough understanding of BGP is assumed by a
reader planning on deploying the design described within the
document.
2. Network Design Requirements
This section describes and summarizes network design requirements for
large-scale data centers.
2.1. Bandwidth and Traffic Patterns
The primary requirement when building an interconnection network for
a large number of servers is to accommodate application bandwidth and
latency requirements. Until recently it was quite common to see the
majority of traffic entering and leaving the data center, commonly
referred to as "north-south" traffic. Traditional "tree" topologies
were sufficient to accommodate such flows, even with high
oversubscription ratios between the layers of the network. If more
bandwidth was required, it was added by "scaling up" the network
elements, e.g., by upgrading the device's linecards or fabrics or
replacing the device with one with higher port density.
Today many large-scale data centers host applications generating
significant amounts of server-to-server traffic, which does not
egress the DC, commonly referred to as "east-west" traffic. Examples
of such applications could be computer clusters such as Hadoop
[HADOOP], massive data replication between clusters needed by certain
applications, or virtual machine migrations. Scaling traditional
tree topologies to match these bandwidth demands becomes either too
expensive or impossible due to physical limitations, e.g., port
density in a switch.
2.2. CAPEX Minimization
The Capital Expenditures (CAPEX) associated with the network
infrastructure alone constitutes about 10-15% of total data center
expenditure (see [GREENBERG2009]). However, the absolute cost is
significant, and hence there is a need to constantly drive down the
cost of individual network elements.  This can be accomplished in two
ways:
o Unifying all network elements, preferably using the same hardware
type or even the same device. This allows for volume pricing on
bulk purchases and reduced maintenance and inventory costs.
o Driving costs down using competitive pressures, by introducing
multiple network equipment vendors.
In order to allow for good vendor diversity, it is important to
minimize the software feature requirements for the network elements.
This strategy provides maximum flexibility of vendor equipment
choices while enforcing interoperability using open standards.
2.3. OPEX Minimization
Operating large-scale infrastructure can be expensive as a larger
number of elements will statistically fail more often.  Having a
simpler design and operating using a limited software feature set
minimizes software issue-related failures.
An important aspect of Operational Expenditure (OPEX) minimization is
reducing the size of failure domains in the network. Ethernet
networks are known to be susceptible to broadcast or unicast traffic
storms that can have a dramatic impact on network performance and
availability. The use of a fully routed design significantly reduces
the size of the data-plane failure domains, i.e., limits them to the
lowest level in the network hierarchy. However, such designs
introduce the problem of distributed control-plane failures. This
observation calls for fewer and simpler control-plane protocols to
reduce protocol interaction issues, thereby reducing the chance of a
network meltdown.  Minimizing software feature requirements as
described in the CAPEX section above also reduces testing and
training requirements.
2.4. Traffic Engineering
In any data center, application load balancing is a critical function
performed by network devices. Traditionally, load balancers are
deployed as dedicated devices in the traffic forwarding path. The
problem arises in scaling load balancers under growing traffic
demand. A preferable solution would be able to scale the load-
balancing layer horizontally, by adding more of the uniform nodes and
distributing incoming traffic across these nodes. In situations like
this, an ideal choice would be to use network infrastructure itself
to distribute traffic across a group of load balancers. The
combination of anycast prefix advertisement [RFC4786] and Equal Cost
Multipath (ECMP) functionality can be used to accomplish this goal.
To allow for more granular load distribution, it is beneficial for
the network to support the ability to perform controlled per-hop
traffic engineering. For example, it is beneficial to directly
control the ECMP next-hop set for anycast prefixes at every level of
the network hierarchy.
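As a non-normative illustration of the combination described above,
the short Python sketch below models how a device might hash a flow's
5-tuple onto one member of an ECMP next-hop set for an anycast
prefix.  The prefix, next-hop addresses, and hash choice are purely
hypothetical and not part of this design.

   import hashlib

   # Hypothetical anycast prefix advertised by a group of load
   # balancers, each reachable via a distinct ECMP next hop.
   ecmp_next_hops = {
       "192.0.2.0/24": ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"],
   }

   def select_next_hop(prefix, flow_five_tuple):
       """Hash the flow 5-tuple onto one of the equal-cost next hops."""
       nhops = ecmp_next_hops[prefix]
       digest = hashlib.sha256(repr(flow_five_tuple).encode()).digest()
       return nhops[int.from_bytes(digest[:4], "big") % len(nhops)]

   # Packets of the same flow consistently map to the same next hop.
   flow = ("198.51.100.7", 34567, "192.0.2.10", 443, "tcp")
   print(select_next_hop("192.0.2.0/24", flow))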
2.5. Summarized Requirements
This section summarizes the list of requirements outlined in the
previous sections:
o REQ1: Select a topology that can be scaled "horizontally" by
adding more links and network devices of the same type without
requiring upgrades to the network elements themselves.
o REQ2: Define a narrow set of software features/protocols supported
by a multitude of networking equipment vendors.
o REQ3: Choose a routing protocol that has a simple implementation
in terms of programming code complexity and ease of operational
support.
o REQ4: Minimize the failure domain of equipment or protocol issues
as much as possible.
o REQ5: Allow for some traffic engineering, preferably via explicit
control of the routing prefix next hop using built-in protocol
mechanics.
3. Data Center Topologies Overview
This section provides an overview of two general types of data center
designs -- hierarchical (also known as "tree-based") and Clos-based
network designs.
3.1. Traditional DC Topology
In the networking industry, a common design choice for data centers
typically looks like an (upside down) tree with redundant uplinks and
three layers of hierarchy, namely: core, aggregation/distribution, and
access layers (see Figure 1). To accommodate bandwidth demands, each
higher layer, from the server towards DC egress or WAN, has higher
port density and bandwidth capacity where the core functions as the
"trunk" of the tree-based design. To keep terminology uniform and
for comparison with other designs, in this document these layers will
be referred to as Tier 1, Tier 2, and Tier 3 "tiers" instead of core,
aggregation, or access layers.
   [Figure 1 shows a typical tree topology: redundant Tier 1 (core)
   devices at the top, Tier 2 (aggregation/distribution) devices below
   them, and Tier 3 (access) devices connecting the servers.  The
   original ASCII diagram is not reproduced here.]

                 Figure 1: Typical DC Network Topology
Unfortunately, as noted previously, it is not possible to scale a
tree-based design to a large enough degree for handling large-scale
designs due to the inability to acquire Tier 1 devices
with a large enough port density to sufficiently scale Tier 2. Also,
continuous upgrades or replacement of the upper-tier devices are
required as deployment size or bandwidth requirements increase, which
is operationally complex. For this reason, REQ1 is in place,
eliminating this type of design from consideration.
3.2. Clos Network Topology
This section describes a common design for horizontally scalable
topology in large-scale data centers in order to meet REQ1.
3.2.1. Overview

A common choice for a horizontally scalable topology is a folded Clos
topology, sometimes called "fat-tree" (for example, [INTERCON] and
[ALFARES2008]). This topology features an odd number of stages
(sometimes known as "dimensions") and is commonly made of uniform
elements, e.g., network switches with the same port count.
Therefore, the choice of folded Clos topology satisfies REQ1 and
facilitates REQ2. See Figure 2 below for an example of a folded
3-stage Clos topology (3 stages counting Tier 2 stage twice, when
tracing a packet flow):
   [Figure 2 shows a folded 3-stage Clos: Tier 1 devices at the top,
   each connected to every Tier 2 device; each Tier 2 device has M
   uplinks toward Tier 1 and N links down toward the servers.  The
   original ASCII diagram is not reproduced here.]

                Figure 2: 3-Stage Folded Clos Topology
This topology is often also referred to as a "Leaf and Spine"
network, where "Spine" is the name given to the middle stage of the
Clos topology (Tier 1) and "Leaf" is the name of the input/output
stage
(Tier 2). For uniformity, this document will refer to these layers
using the "Tier n" notation.
3.2.2. Clos Topology Properties
The following are some key properties of the Clos topology:
o The topology is fully non-blocking, or more accurately non-
interfering, if M >= N and oversubscribed by a factor of N/M
otherwise.  Here, M and N are the uplink and downlink port counts,
respectively, for a Tier 2 switch as shown in Figure 2 (see the
sketch following this list).
o Utilizing this topology requires control and data-plane support
for ECMP with a fan-out of M or more.
o Tier 1 switches have exactly one path to every server in this
topology. This is an important property that makes route
summarization dangerous in this topology (see Section 8.2 below).
o Traffic flowing from server to server is load balanced over all
available paths using ECMP.
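The arithmetic behind the first property can be expressed in a few
lines of Python; this is an illustrative sketch only, with M and N
taken as the uplink and downlink port counts of a Tier 2 switch as in
Figure 2.

   def tier2_properties(m_uplinks, n_downlinks):
       """Oversubscription ratio and required ECMP fan-out for a
       Tier 2 switch with M uplinks and N server-facing downlinks."""
       # Non-blocking (non-interfering) when M >= N; otherwise the
       # fabric is oversubscribed by a factor of N/M.
       oversubscription = max(1.0, n_downlinks / m_uplinks)
       ecmp_fan_out = m_uplinks   # ECMP must span all M uplinks
       return oversubscription, ecmp_fan_out

   print(tier2_properties(4, 4))    # (1.0, 4) -> non-blocking
   print(tier2_properties(4, 12))   # (3.0, 4) -> 3:1 oversubscribed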
3.2.3. Scaling the Clos Topology
A Clos topology can be scaled either by increasing network element
port density or by adding more stages, e.g., moving to a 5-stage
Clos, as illustrated in Figure 3 below:
   [Figure 3 shows a 5-stage Clos topology.  The cluster on the left
   contains Tier 3 devices DEV A and DEV B with their attached
   servers, connected to Tier 2 devices DEV C and DEV D, which in turn
   connect to the Tier 1 devices in the middle; a second, symmetric
   cluster appears on the right.  The original ASCII diagram is not
   reproduced here.]

                    Figure 3: 5-Stage Clos Topology
The small example of topology in Figure 3 is built from devices with
a port count of 4. In this document, one set of directly connected
Tier 2 and Tier 3 devices along with their attached servers will be
referred to as a "cluster". For example, DEV A, B, C, D, and the
servers that connect to DEV A and B, in Figure 3 form a cluster.  A
cluster is also a useful unit of deployment or maintenance that can
be operated on at a different frequency than the entire topology.
In practice, Tier 3 of the network, which is typically Top-of-Rack
switches (ToRs), is where oversubscription is introduced to allow for
packaging of more servers in the data center while meeting the
bandwidth requirements for different types of applications. The main
reason to limit oversubscription at a single layer of the network is
to simplify application development that would otherwise need to
account for multiple bandwidth pools: within rack (Tier 3), between
racks (Tier 2), and between clusters (Tier 1). Since
oversubscription does not have a direct relationship to the routing
design, it is not discussed further in this document.
3.2.4. Managing the Size of Clos Topology Tiers
If a data center network size is small, it is possible to reduce the
number of switches in Tier 1 or Tier 2 of a Clos topology by a factor
of two. To understand how this could be done, take Tier 1 as an
example. Every Tier 2 device connects to a single group of Tier 1
devices. If half of the ports on each of the Tier 1 devices are not
being used, then it is possible to reduce the number of Tier 1
devices by half and simply map two uplinks from a Tier 2 device to
the same Tier 1 device that were previously mapped to different Tier
1 devices. This technique maintains the same bandwidth while
reducing the number of elements in Tier 1, thus saving on CAPEX. The
tradeoff, in this example, is the reduction of maximum DC size in
terms of overall server count by half.
In this example, Tier 2 devices will be using two parallel links to
connect to each Tier 1 device. If one of these links fails, the
other will pick up all traffic of the failed link, possibly resulting
in heavy congestion and quality of service degradation if the path
determination procedure does not take bandwidth amount into account,
since the number of upstream Tier 1 devices is likely wider than two.
To avoid this situation, parallel links can be grouped in link
aggregation groups (LAGs), e.g., [IEEE8023AD], with widely available
implementation settings that take the whole "bundle" down upon a
single link failure. Equivalent techniques that enforce "fate
sharing" on the parallel links can be used in place of LAGs to
achieve the same effect. As a result of such fate-sharing, traffic
from two or more failed links will be rebalanced over the multitude
of remaining paths that equals the number of Tier 1 devices. This
example uses two links for simplicity; having more links in a bundle
will lessen the impact on capacity upon a member-link failure.
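The following Python fragment sketches the capacity argument above
under hypothetical numbers (eight Tier 1 devices, two-link bundles);
it is illustrative only and not part of the design.

   def uplink_capacity_after_failure(tier1_devices, links_per_bundle,
                                     fate_sharing):
       """Fraction of a Tier 2 device's total uplink capacity left
       after a single member-link failure."""
       total_links = tier1_devices * links_per_bundle
       if fate_sharing:
           # The whole bundle is withdrawn; traffic rebalances evenly
           # over the remaining Tier 1 devices.
           return (tier1_devices - 1) / tier1_devices
       # Without fate sharing the nominal capacity loss is smaller,
       # but the surviving link in the affected bundle must absorb
       # the failed link's share and may congest.
       return (total_links - 1) / total_links

   print(uplink_capacity_after_failure(8, 2, fate_sharing=True))   # 0.875
   print(uplink_capacity_after_failure(8, 2, fate_sharing=False))  # 0.9375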
4. Data Center Routing Overview
This section provides an overview of three general types of data
center protocol designs -- Layer 2 only, hybrid L2/L3, and
Layer 3 only.
4.1. L2-Only Designs
Originally, most data center designs used the Spanning Tree Protocol
(STP), originally defined in [IEEE8021D-1990], for loop-free topology
creation, typically utilizing variants of the traditional DC topology
described in Section 3.1. At the time, many DC switches either did
not support Layer 3 routing protocols or supported them with
additional licensing fees, which played a part in the design choice.
Although many enhancements have been made through the introduction of
Rapid Spanning Tree Protocol (RSTP) in the latest revision of
[IEEE8021D-2004] and Multiple Spanning Tree Protocol (MST) specified
in [IEEE8021Q] that increase convergence, stability, and load-
balancing in larger topologies, many of the fundamentals of the
protocol limit its applicability in large-scale DCs. STP and its
newer variants use an active/standby approach to path selection, and
are therefore hard to deploy in horizontally scaled topologies as
described in Section 3.2. Further, operators have had many
experiences with large failures due to issues caused by improper
cabling, misconfiguration, or flawed software on a single device.
These failures regularly affected the entire spanning-tree domain and
were very hard to troubleshoot due to the nature of the protocol.
For these reasons, and since almost all DC traffic is now IP
(therefore requiring a Layer 3 routing protocol at the network edge
for external connectivity), designs utilizing STP usually fail all of
the requirements of large-scale DC operators. Various enhancements
to link-aggregation protocols such as [IEEE8023AD], generally known
as Multi-Chassis Link-Aggregation (M-LAG) made it possible to use
Layer 2 designs with active-active network paths while relying on STP
as the backup for loop prevention. The major downsides of this
approach are the lack of ability to scale linearly past two in most
implementations, lack of standards-based implementations, and the
added failure domain risk of syncing state between the devices.
It should be noted that building large, horizontally scalable,
L2-only networks without STP has recently become possible through the
introduction of the Transparent Interconnection of Lots of Links
(TRILL) protocol in [RFC6325].  TRILL resolves many of the issues STP
has for large-scale DC design; however, the limited number of
implementations, and often the requirement for specific equipment
that supports it, have limited its applicability and increased the
cost of such designs.
Finally, neither the base TRILL specification nor the M-LAG approach
totally eliminate the problem of the shared broadcast domain that is
so detrimental to the operations of any Layer 2, Ethernet-based
solution.  Later TRILL extensions have been proposed to solve this
problem, primarily based on the approaches outlined in
[RFC7067], but this even further limits the number of available
interoperable implementations that can be used to build a fabric.
Therefore, TRILL-based designs have issues meeting REQ2, REQ3, and
REQ4.
4.2. Hybrid L2/L3 Designs
Operators have sought to limit the impact of data-plane faults and
build large-scale topologies through implementing routing protocols
in either the Tier 1 or Tier 2 parts of the network and dividing the
Layer 2 domain into numerous, smaller domains. This design has
allowed data centers to scale up, but at the cost of complexity in
managing multiple network protocols. For the following reasons,
operators have retained Layer 2 in either the access (Tier 3) or both
access and aggregation (Tier 3 and Tier 2) parts of the network:
o Supporting legacy applications that may require direct Layer 2
adjacency or use non-IP protocols.
o Seamless mobility for virtual machines that require the
preservation of IP addresses when a virtual machine moves to a
different Tier 3 switch.
o Simplified IP addressing, i.e., fewer IP subnets are required for
the DC.
o Application load balancing may require direct Layer 2 reachability
to perform certain functions such as Layer 2 Direct Server Return
(DSR). See [L3DSR].
o Continued CAPEX differences between L2- and L3-capable switches.
4.3. L3-Only Designs
Network designs that leverage IP routing down to Tier 3 of the
network have gained popularity as well. The main benefit of these
designs is improved network stability and scalability, as a result of
confining L2 broadcast domains. Commonly, an Interior Gateway
Protocol (IGP) such as Open Shortest Path First (OSPF) [RFC2328] is
used as the primary routing protocol in such a design. As data
centers grow in scale, and server count exceeds tens of thousands,
such fully routed designs have become more attractive.
Choosing an L3-only design greatly simplifies the network,
facilitating the meeting of REQ1 and REQ2, and has widespread
adoption in networks where large Layer 2 adjacency and larger size
Layer 3 subnets are not as critical compared to network scalability
and stability. Application providers and network operators continue
to develop new solutions to meet some of the requirements that
previously had driven large Layer 2 domains by using various overlay
or tunneling techniques.
5. Routing Protocol Design
In this section, the motivations for using External BGP (EBGP) as the
single routing protocol for data center networks having a Layer 3
protocol design and Clos topology are reviewed. Then, a practical
approach for designing an EBGP-based network is provided.
5.1. Choosing EBGP as the Routing Protocol
REQ2 would give preference to the selection of a single routing
protocol to reduce complexity and interdependencies. While it is
common to rely on an IGP in this situation, sometimes with either the
addition of EBGP at the device bordering the WAN or Internal BGP
(IBGP) throughout, this document proposes the use of an EBGP-only
design.
Although EBGP is the protocol used for almost all Inter-Domain
Routing in the Internet and has wide support from both vendor and
service provider communities, it is not generally deployed as the
primary routing protocol within the data center for a number of
reasons (some of which are interrelated):
o BGP is perceived as a "WAN-only protocol" and is not often
considered for enterprise or data center applications.
o BGP is believed to have a "much slower" routing convergence
compared to IGPs.
o Large-scale BGP deployments typically utilize an IGP for BGP next-
hop resolution as all nodes in the IBGP topology are not directly
connected.
o BGP is perceived to require significant configuration overhead and
does not support neighbor auto-discovery.
This document discusses some of these perceptions, especially as
applicable to the proposed design, and highlights some of the
advantages of using the protocol such as:
o BGP has less complexity in parts of its protocol design --
internal data structures and state machine are simpler as compared
to most link-state IGPs such as OSPF. For example, instead of
implementing adjacency formation, adjacency maintenance and/or
flow-control, BGP simply relies on TCP as the underlying
transport. This fulfills REQ2 and REQ3.
o BGP information flooding overhead is less when compared to link-
state IGPs. Since every BGP router calculates and propagates only
the best-path selected, a network failure is masked as soon as the
BGP speaker finds an alternate path, which exists when highly
symmetric topologies, such as Clos, are coupled with an EBGP-only
design. In contrast, the event propagation scope of a link-state
IGP is an entire area, regardless of the failure type. In this
way, BGP better meets REQ3 and REQ4. It is also worth mentioning
that all widely deployed link-state IGPs feature periodic
refreshes of routing information, while BGP does not expire routing
state, although this rarely impacts modern router control planes.
(A short sketch of shortest-AS_PATH multipath selection follows this
list.)
o BGP supports third-party (recursively resolved) next hops. This
allows for manipulating multipath to be non-ECMP-based or
forwarding-based on application-defined paths, through
establishment of a peering session with an application
"controller" that can inject routing information into the system,
satisfying REQ5. OSPF provides similar functionality using
concepts such as "Forwarding Address", but with more difficulty in
implementation and far less control of information propagation
scope.
o Using a well-defined Autonomous System Number (ASN) allocation
scheme and standard AS_PATH loop detection, "BGP path hunting"
(see [JAKMA2008]) can be controlled and complex unwanted paths
will be ignored. See Section 5.2 for an example of a working ASN
allocation scheme. In a link-state IGP, accomplishing the same
goal would require multi-(instance/topology/process) support,
typically not available in all DC devices and quite complex to
configure and troubleshoot. Using a traditional single flooding
domain, which most DC designs utilize, under certain failure
conditions may pick up unwanted lengthy paths, e.g., traversing
multiple Tier 2 devices.
o EBGP configuration that is implemented with minimal routing policy
is easier to troubleshoot for network reachability issues. In
most implementations, it is straightforward to view contents of
the BGP Loc-RIB and compare it to the router's Routing Information
Base (RIB). Also, in most implementations, an operator can view
every BGP neighbor's Adj-RIB-In and Adj-RIB-Out structures, and
therefore incoming and outgoing Network Layer Reachability
Information (NLRI) can be easily correlated on both sides of a BGP
sides of a BGP session. Thus, BGP satisfies REQ3.
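As a rough illustration of the multipath behavior referenced in the
second bullet above, the sketch below keeps only the received paths
with the shortest AS_PATH and programs them as an ECMP set.  The peer
names and ASNs are hypothetical and merely follow the style of the
allocation scheme in Section 5.2.

   def multipath_set(received_paths):
       """Keep the paths with the shortest AS_PATH; equal-length
       candidates can be installed together as ECMP next hops."""
       shortest = min(len(p["as_path"]) for p in received_paths)
       return [p for p in received_paths
               if len(p["as_path"]) == shortest]

   # A remote Tier 3 prefix as learned by a Tier 3 device from two of
   # its Tier 2 EBGP peers (both peers share the cluster's Tier 2 ASN).
   received = [
       {"peer": "tier2-a", "as_path": (64701, 64600, 64702, 65032)},
       {"peer": "tier2-b", "as_path": (64701, 64600, 64702, 65032)},
   ]
   print(multipath_set(received))   # both paths retained -> 2-way ECMP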
5.2. EBGP Configuration for Clos Topology
Clos topologies that have more than 5 stages are very uncommon due to
the large numbers of interconnects required by such a design.
Therefore, the examples below are made with reference to the 5-stage
Clos topology (in unfolded state).
5.2.1. EBGP Configuration Guidelines and Example ASN Scheme
The diagram below illustrates an example of an ASN allocation scheme.
The following is a list of guidelines that can be used:
o EBGP single-hop sessions are established over direct point-to-
point links interconnecting the network nodes, no multi-hop or
loopback sessions are used, even in the case of multiple links
between the same pair of nodes.
o Private Use ASNs from the range 64512-65534 are used to avoid ASN
conflicts.

o A single ASN is allocated to all of the Clos topology's Tier 1
devices.

o A unique ASN is allocated to each set of Tier 2 devices in the
same cluster.

o A unique ASN is allocated to every Tier 3 device (e.g., ToR) in
this topology.
   [Figure 4 applies the guidelines above to the 5-stage Clos of
   Figure 3: the Tier 1 devices share one ASN, each cluster's set of
   Tier 2 devices shares its own ASN (shown as "ASN 646XX"), and every
   Tier 3 device has a distinct ASN (shown as "ASN 65YYY"), with
   servers attached below Tier 3.  The original ASCII diagram is not
   reproduced here.]

              Figure 4: BGP ASN Layout for 5-Stage Clos

5.2.2. Private Use ASNs
The original range of Private Use ASNs [RFC6996] limited operators to
1023 unique ASNs. Since it is quite likely that the number of
network devices may exceed this number, a workaround is required.
One approach is to re-use the ASNs assigned to the Tier 3 devices
across different clusters. For example, Private Use ASNs 65001,
65002 ... 65032 could be used within every individual cluster and
assigned to Tier 3 devices.
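A minimal sketch of such an allocation, assuming hypothetical ASN
values (64600 for Tier 1, 647xx per cluster for Tier 2, and 65001 and
up for Tier 3, re-used in every cluster):

   def allocate_asns(num_clusters, tors_per_cluster,
                     tier1_asn=64600, tier2_base=64700,
                     tier3_base=65000):
       """Private Use ASN plan: one ASN for all Tier 1 devices, one
       ASN per cluster's Tier 2 device set, and Tier 3 ASNs re-used
       across clusters to stay within the 1023 two-octet Private Use
       ASNs."""
       plan = {"tier1": tier1_asn, "clusters": []}
       for c in range(num_clusters):
           plan["clusters"].append({
               "tier2": tier2_base + c,
               "tier3": [tier3_base + 1 + t
                         for t in range(tors_per_cluster)],
           })
       return plan

   plan = allocate_asns(num_clusters=4, tors_per_cluster=32)
   # Every cluster's Tier 3 devices use ASNs 65001..65032, which in
   # turn requires the "Allowas-in" relaxation described below.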
To avoid route suppression due to the AS_PATH loop detection
mechanism in BGP, upstream EBGP sessions on Tier 3 devices must be
configured with the "Allowas-in" feature [ALLOWASIN] that allows
accepting a device's own ASN in received route advertisements.
Although this feature is not standardized, it is widely available
across multiple vendors' implementations.  Introducing this feature
does not make routing loops more likely in the design since the
AS_PATH is being added to by routers at each of the topology tiers
and AS_PATH length is an early tie breaker in the BGP path selection
process. Further loop protection is still in place at the Tier 1
device, which will not accept routes with a path including its own
ASN. Tier 2 devices do not have direct connectivity with each other.
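The effect of this relaxation can be sketched as follows; the ASN
values are hypothetical and match the earlier example, and the
function models only the AS_PATH occurrence check, not a complete BGP
input policy.

   def accept_route(as_path, local_asn, allowas_in=0):
       """Accept a route whose AS_PATH contains the receiver's own
       ASN at most 'allowas_in' times (0 = standard BGP behavior)."""
       return as_path.count(local_asn) <= allowas_in

   # A Tier 3 device using re-used ASN 65001 receives a route that was
   # originated by a Tier 3 device in another cluster, also ASN 65001:
   path = (64701, 64600, 64702, 65001)
   print(accept_route(path, 65001))                 # False: dropped
   print(accept_route(path, 65001, allowas_in=1))   # True: accepted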
Another solution to this problem would be to use Four-Octet ASNs
([RFC6793]), where there are additional Private Use ASNs available,
see [IANA.AS]. Use of Four-Octet ASNs puts additional protocol
complexity in the BGP implementation and should be balanced against
the complexity of re-use when considering REQ3 and REQ4. Perhaps
more importantly, they are not yet supported by all BGP
implementations, which may limit vendor selection of DC equipment.
When supported, ensure that deployed implementations are able to
remove the Private Use ASNs when external connectivity
(Section 5.2.4) to these ASNs is required.
5.2.3. Prefix Advertisement
A Clos topology features a large number of point-to-point links and
associated prefixes. Advertising all of these routes into BGP may
create Forwarding Information Base (FIB) overload in the network
devices. Advertising these links also puts additional path
computation stress on the BGP control plane for little benefit.
There are two possible solutions:
o Do not advertise any of the point-to-point links into BGP. Since
the EBGP-based design changes the next-hop address at every
device, distant networks will automatically be reachable via the
advertising EBGP peer and do not require reachability to these
prefixes. However, this may complicate operations or monitoring:
e.g., using the popular "traceroute" tool will display IP
addresses that are not reachable.
o Advertise point-to-point links, but summarize them on every
device. This requires an address allocation scheme such as
allocating a consecutive block of IP addresses per Tier 1 and Tier
2 device to be used for point-to-point interface addressing to the
lower layers (Tier 2 uplinks will be allocated from Tier 1 address
blocks and so forth); a sketch of such an allocation follows this
list.
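A brief sketch of the second option, using a hypothetical per-device
block from documentation address space to derive /31 point-to-point
subnets that all fall under one summarizable prefix:

   import ipaddress

   def p2p_links_for_device(device_block, num_links):
       """Carve /31 point-to-point subnets for a device's downlinks
       out of a single per-device block, so only the covering block
       needs to be advertised into BGP."""
       block = ipaddress.ip_network(device_block)
       return list(block.subnets(new_prefix=31))[:num_links]

   links = p2p_links_for_device("198.51.100.0/26", num_links=16)
   summary = ipaddress.ip_network("198.51.100.0/26")  # advertised route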
Server subnets on Tier 3 devices must be announced into BGP without
using route summarization on Tier 2 and Tier 1 devices. Summarizing
subnets in a Clos topology results in route black-holing under a
single link failure (e.g., between Tier 2 and Tier 3 devices), and
hence must be avoided. The use of peer links within the same tier to
resolve the black-holing problem by providing "bypass paths" is
undesirable due to O(N^2) complexity of the peering-mesh and waste of
ports on the devices. An alternative to the full mesh of peer links
would be to use a simpler bypass topology, e.g., a "ring" as
described in [FB4POST], but such a topology adds extra hops and has
limited bandwidth. It may require special tweaks to make BGP routing
work, e.g., splitting every device into an ASN of its own. Later in
this document, Section 8.2 introduces a less intrusive method for
performing a limited form of route summarization in Clos networks and
discusses its associated tradeoffs.
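The black-holing problem can be illustrated with a simplified lookup
model; the prefixes and device names below are hypothetical.

   # A Tier 2 device summarizing its cluster as 10.8.0.0/16 while
   # holding specific routes toward its Tier 3 devices.
   cluster_summary = "10.8.0.0/16"
   tier3_routes = {"10.8.1.0/24": "tor-1", "10.8.2.0/24": "tor-2"}

   # The link to tor-2 fails: the specific route disappears, but the
   # summary is still advertised upstream, so Tier 1 devices keep
   # sending part of the 10.8.2.0/24 traffic to this Tier 2 device.
   del tier3_routes["10.8.2.0/24"]

   def forward(dst_prefix):
       """Simplified lookup against this device's specific routes."""
       return tier3_routes.get(dst_prefix, "black-holed")

   print(forward("10.8.2.0/24"))   # "black-holed"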
5.2.4. External Connectivity
A dedicated cluster (or clusters) in the Clos topology could be used
for the purpose of connecting to the Wide Area Network (WAN) edge
devices, or WAN Routers. Tier 3 devices in such a cluster would be
replaced with WAN routers, and EBGP peering would be used again,
though WAN routers are likely to belong to a public ASN if Internet
connectivity is required in the design. The Tier 2 devices in such a
dedicated cluster will be referred to as "Border Routers" in this
document. These devices have to perform a few special functions:
o Hide network topology information when advertising paths to WAN
routers, i.e., remove Private Use ASNs [RFC6996] from the AS_PATH
attribute.  This is typically done to avoid ASN collisions
between different data centers and also to provide a uniform
AS_PATH length to the WAN for purposes of WAN ECMP to anycast
prefixes originated in the topology. An implementation-specific
BGP feature typically called "Remove Private AS" is commonly used
to accomplish this. Depending on implementation, the feature
should strip a contiguous sequence of Private Use ASNs found in an
AS_PATH attribute prior to advertising the path to a neighbor.
This assumes that all ASNs used for intra data center numbering
are from the Private Use ranges. The process for stripping the
Private Use ASNs is not currently standardized, see [REMOVAL].
However, most implementations at least follow the logic described
in this vendor's document [VENDOR-REMOVE-PRIVATE-AS], which is
enough for the design specified (a sketch of this stripping step
follows this list).
o Originate a default route to the data center devices. This is the
only place where a default route can be originated, as route
summarization is risky for the unmodified Clos topology.
Alternatively, Border Routers may simply relay the default route
learned from WAN routers. Advertising the default route from
Border Routers requires that all Border Routers be fully connected
to the WAN Routers upstream, to provide resistance to a single-
link failure causing the black-holing of traffic. To prevent
black-holing in the situation when all of the EBGP sessions to the
WAN routers fail simultaneously on a given device, it is more
desirable to readvertise the default route rather than originating
the default route via complicated conditional route origination
schemes provided by some implementations [CONDITIONALROUTE].
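The sketch below shows the stripping step referenced in the first
bullet; it assumes (as the design does) that all intra-DC ASNs come
from the two-octet Private Use range, and it is not a definitive
vendor implementation.

   def strip_private_asns(as_path, private=range(64512, 65535)):
       """Remove the leading contiguous run of Private Use ASNs from
       an AS_PATH before advertising it to a WAN peer."""
       i = 0
       while i < len(as_path) and as_path[i] in private:
           i += 1
       return as_path[i:]

   # The entire intra-DC portion of the path collapses, giving the WAN
   # a uniform AS_PATH length regardless of the originating cluster.
   print(strip_private_asns((65032, 64702, 64600, 64701)))   # ()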
5.2.5. Route Summarization at the Edge
It is often desirable to summarize network reachability information
prior to advertising it to the WAN network due to the high amount of
IP prefixes originated from within the data center in a fully routed
network design. For example, a network with 2000 Tier 3 devices will
have at least 2000 server subnets advertised into BGP, along with
the infrastructure prefixes. However, as discussed in Section 5.2.3,
the proposed network design does not allow for route summarization
due to the lack of peer links inside every tier.
However, it is possible to lift this restriction for the Border
Routers by devising a different connectivity model for these devices.
There are two options possible:
o Interconnect the Border Routers using a full-mesh of physical
links or using any other "peer-mesh" topology, such as ring or
hub-and-spoke.  Configure BGP accordingly on all Border Routers to
exchange network reachability information, e.g., by adding a mesh
of IBGP sessions. The interconnecting peer links need to be
appropriately sized for traffic that will be present in the case
of a device or link failure in the mesh connecting the Border
Routers.
o Tier 1 devices may have additional physical links provisioned
toward the Border Routers (which are Tier 2 devices from the
perspective of Tier 1). Specifically, if protection from a single
link or node failure is desired, each Tier 1 device would have to
connect to at least two Border Routers. This puts additional
requirements on the port count for Tier 1 devices and Border
Routers, potentially making it a nonuniform, larger port count,
device compared with the other devices in the Clos. This also
reduces the number of ports available to "regular" Tier 2
switches, and hence the number of clusters that could be
interconnected via Tier 1.
If any of the above options are implemented, it is possible to
perform route summarization at the Border Routers toward the WAN
network core without risking a routing black-hole condition under a
single link failure. Both of the options would result in nonuniform
topology as additional links have to be provisioned on some network
devices.
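As an illustrative sketch of such edge summarization, assuming
hypothetical per-cluster server blocks carved from one contiguous
DC-wide range, the Border Routers could advertise only the covering
aggregate once either interconnection option above is in place:

   import ipaddress

   # Hypothetical per-cluster server subnets.
   cluster_subnets = [ipaddress.ip_network("10.%d.0.0/16" % i)
                      for i in range(8)]

   # With full reachability exchanged among the Border Routers, only
   # the covering aggregate needs to be advertised toward the WAN.
   summary = list(ipaddress.collapse_addresses(cluster_subnets))
   print(summary)   # [IPv4Network('10.0.0.0/13')]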