The following are not required or are non-goals. This should not be
taken to mean that these issues must not be addressed by a new
architecture. Rather, addressing these issues or not is purely an
optional matter for the architects.
2.2.1. Forwarding Table Optimization
We believe that it is not necessary for the architecture to minimize
the size of the forwarding tables (FIBs). Current memory sizes,
speeds, and prices, along with processor and Application-specific
Integrated Circuit (ASIC) capabilities allow forwarding tables to be
very large, O(10^6), and allow fast (100 M lookups/second) tables to be
built with little difficulty.
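The scale claimed above is easy to check with a toy model. The
following is a minimal longest-prefix-match sketch (real routers use
tries or TCAMs for speed; the class and names here are illustrative,
not drawn from any particular router implementation):

```python
import ipaddress

# Illustrative longest-prefix-match FIB: one table per prefix length,
# keyed by the masked network address. Even with O(10^6) entries this
# fits comfortably in modern memory.
class Fib:
    def __init__(self):
        self.tables = {length: {} for length in range(33)}

    def add(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        self.tables[net.prefixlen][int(net.network_address)] = next_hop

    def lookup(self, addr):
        a = int(ipaddress.ip_address(addr))
        # Longest prefix wins: try /32 first, then shorter prefixes.
        for length in range(32, -1, -1):
            key = (a >> (32 - length)) << (32 - length) if length else 0
            if key in self.tables[length]:
                return self.tables[length][key]
        return None

fib = Fib()
fib.add("10.0.0.0/8", "if0")
fib.add("10.1.0.0/16", "if1")
print(fib.lookup("10.1.2.3"))  # "if1": the /16 is longer than the /8
print(fib.lookup("10.9.0.1"))  # "if0": only the /8 covers it
```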
2.2.2. Traffic Engineering
"Traffic engineering" is one of those terms that has become terribly
overloaded. If one asks N people what traffic engineering is, one
would get something like N! disjoint answers. Therefore, we elect
not to require "traffic engineering", per se. Instead, we have
endeavored to determine what the ultimate intent is when operators
"traffic engineer" their networks and then make those capabilities an
inherent part of the system.
2.2.3. Multicast
The new architecture is not designed explicitly to be an inter-domain
multicast routing architecture. However, given the notable lack of a
viable, robust, and widely deployed inter-domain multicast routing
architecture, the architecture should not hinder the development and
deployment of inter-domain multicast routing, provided that this can
be done without an adverse effect on meeting the other requirements.
We do note, however, that one respected network sage [Clark91] has said:
When you see a bunch of engineers standing around congratulating
themselves for solving some particularly ugly problem in
networking, go up to them, whisper "multicast", jump back, and
watch the fun begin...
2.2.4. Quality of Service (QoS)
The architecture concerns itself primarily with disseminating network
topology information so that routers may select paths to destinations
and build appropriate forwarding tables. Quality of Service (QoS) is
not a part of this function, and we make no requirements with respect
to QoS.
However, QoS is an area of great and evolving interest. It is
reasonable to expect that in the not too distant future,
sophisticated QoS facilities will be deployed in the Internet. Any
new architecture and protocols should be developed with an eye toward
these future evolutions. Extensibility mechanisms, allowing future
QoS routing and signaling protocols to "piggy-back" on top of the
basic routing system, are desired.
We do require the ability to assign attributes to entities and then
do path generation and selection based on those attributes. Some may
call this QoS.
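The capability described above can be sketched as a constrained
shortest-path computation: links carry attribute sets, and path
selection considers only links whose attributes satisfy the request.
This is an illustrative sketch under invented names, not a proposed
protocol mechanism:

```python
import heapq

# Attribute-constrained path selection: only links whose attribute set
# contains every requested attribute participate in the computation.
def constrained_path(links, src, dst, want):
    # links: {(u, v): (cost, {attributes})}; treated as undirected.
    adj = {}
    for (u, v), (cost, attrs) in links.items():
        if want <= attrs:                     # link has all wanted attrs
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return None

links = {("A", "B"): (1, {"low-delay"}),
         ("B", "C"): (1, {"low-delay"}),
         ("A", "C"): (1, set())}             # cheap, but lacks the attribute
print(constrained_path(links, "A", "C", {"low-delay"}))  # 2, via B
print(constrained_path(links, "A", "C", set()))          # 1, direct
```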
2.2.5. IP Prefix Aggregation
There is no specific requirement that CIDR-style (Classless Inter-
Domain Routing) IP prefix aggregation be done by the new
architecture. Address allocation policies, societal pressure, and
the random growth and structure of the Internet have all conspired to
make prefix aggregation extraordinarily difficult, if not impossible.
This means that large numbers of prefixes will be sloshing about in
the routing system and that forwarding tables will grow quite big.
This is a cost that we believe must be borne.
Nothing in this non-requirement should be interpreted as saying that
prefix aggregation is explicitly prohibited. CIDR-style IP prefix
aggregation might be used as a mechanism to meet other requirements,
such as scaling.
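For illustration, CIDR-style aggregation of adjacent prefixes can be
demonstrated with Python's standard ipaddress module; the prefixes
below are documentation examples, not real allocations:

```python
import ipaddress

# CIDR-style aggregation as a scaling mechanism: adjacent prefixes
# (assumed here to share a next hop) collapse into one covering prefix.
prefixes = [ipaddress.ip_network(p) for p in
            ["192.0.2.0/25", "192.0.2.128/25", "198.51.100.0/24"]]
aggregated = list(ipaddress.collapse_addresses(prefixes))
print(aggregated)  # the two /25s collapse into 192.0.2.0/24
```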
2.2.6. Perfect Safety
Making the system impossible to misconfigure is, we believe, not
required. The checking, constraints, and controls necessary to
achieve this could, we believe, prevent operators from performing
necessary tasks in the face of unforeseen circumstances.
However, safety is always a "good thing", and any results from
research in this area should certainly be taken into consideration
and, where practical, incorporated into the new routing architecture.
2.2.7. Dynamic Load Balancing
History has shown that using the routing system to perform highly
dynamic load balancing among multiple more-or-less-equal paths
usually ends up causing all kinds of instability, etc., in the
network. Thus, we do not require such a capability.
However, this is an area that is ripe for additional research, and
some believe that the capability will be necessary in the future.
Thus, the architecture and protocols should be "malleable" enough to
allow development and deployment of dynamic load-balancing
capabilities, should we ever figure out how to do it.
2.2.8. Renumbering of Hosts and Routers
We believe that the routing system is not required to "do
renumbering" of hosts and routers. That's an IP issue.
Of course, the routing and addressing architecture must be able to
deal with renumbering when it happens.
2.2.9. Host Mobility
In the Internet architecture, host mobility is handled on a per-host
basis by a dedicated, Mobile-IP protocol [RFC3344]. Traffic destined
for a mobile host is explicitly forwarded by dedicated relay agents.
Mobile-IP [RFC3344] adequately solves the host-mobility problem and
we do not see a need for any additional requirements in this area.
Of course, the new architecture must not impede or conflict with
these existing host-mobility mechanisms.
2.2.10. Backward Compatibility
For the purposes of development of the architecture, we assume that
there is a "clean slate". Unless specified in Section 2.1, there are
no explicit requirements that elements, concepts, or mechanisms of
the current routing architecture be carried forward into the new one.
3. Requirements from Group B
The following is the result of the work done by Sub-Group B of the
IRTF RRG in 2001-2002. It was originally released under the title:
"Future Domain Routing Requirements" and was edited by Avri Doria and
Elwyn Davies.
3.1. Group B - Future Domain Routing Requirements
It is generally accepted that there are major shortcomings in the
inter-domain routing of the Internet today and that these may result
in meltdown within an unspecified period of time. Remedying these
shortcomings will require extensive research to tie down the exact
failure modes that lead to these shortcomings and identify the best
techniques to remedy the situation.
Reviewer's Note: Even in 2001, there was a wide difference of
opinion across the community regarding the shortcomings of inter-
domain routing. In the years between writing and publication,
further analysis, changes in operational practice, alterations to
the demands made on inter-domain routing, modifications made to
BGP, and a recognition of the difficulty of finding a replacement
may have altered the views of some members of the community.
Changes in the nature and quality of the services that users want
from the Internet are difficult to provide within the current
framework, as they impose requirements never foreseen by the original
architects of the Internet routing system.
The kind of radical changes that have to be accommodated are
epitomized by the advent of IPv6 and the application of IP mechanisms
to private commercial networks that offer specific service guarantees
beyond the best-effort services of the public Internet. Major
changes to the inter-domain routing system are inevitable to provide
an efficient underpinning for the radically changed and increasingly
commercially-based networks that rely on the IP protocol suite.
3.2. Underlying Principles
Although inter-domain routing is seen as the major source of
problems, the interactions with intra-domain routing, and the
constraints that confining changes to the inter-domain arena would
impose, mean that we should consider the whole area of routing as an
integrated system. This is done for two reasons:
- Requirements should not presuppose the solution. A continued
commitment to the current definitions and split between inter-
domain and intra-domain routing would constitute such a
presupposition. Therefore, this part of the document uses the
name Future Domain Routing (FDR).
- It is necessary to understand the degree to which inter-domain and
intra-domain routing are related within today's routing
architecture.
We are aware that using the term "domain routing" is already fraught
with danger because of possible misinterpretation due to prior usage.
The meaning of "domain routing" will be developed implicitly
throughout the document, but a little advance explicit definition of
the word "domain" is required, as well as some explanation on the
scope of "routing".
This document uses "domain" in a very broad sense, to mean any
collection of systems or domains that come under a common authority
that determines the attributes defining, and the policies
controlling, that collection. The use of "domain" in this manner is
very similar to the concept of region that was put forth by John
Wroclawski in his Metanet model [Wroclawski95]. The idea includes
the notion that certain attributes will characterize the behavior of
the systems within a domain and that there will be borders between
domains. The idea of domain presented here does not presuppose that
two domains will have the same behavior. Nor does it presuppose
anything about the hierarchical nature of domains. Finally, it does
not place restrictions on the nature of the attributes that might be
used to determine membership in a domain. Since today's routing
domains are an example of the concept of domains in this document,
there has been no attempt to create a new term.
Current practice in routing-system design stresses the need to
separate the concerns of the control plane and the forwarding plane
in a router. This document will follow this practice, but we still
use the term "routing" as a global portmanteau to cover all aspects
of the system. Specifically, however, "routing" will be used to mean
the process of discovering, interpreting, and distributing
information about the logical and topological structure of the
network.
3.2.1. Inter-Domain and Intra-Domain
Throughout this section, the terms "intra-domain" and "inter-domain"
will be used. These should be understood as relative terms. In all
cases of domains, there will be a set of network systems that are
within that domain; routing between these systems will be termed
"intra-domain". In some cases there will be routing between domains,
which will be termed "inter-domain". It is possible that the routing
exchange between two network systems can be viewed as intra-domain
from one perspective and as inter-domain from another perspective.
3.2.2. Influences on a Changing Network
The development of the Internet is likely to be driven by a number of
changes that will affect the organization and the usage of the
network:
- Ongoing evolution of the commercial relationships between
(connectivity) service providers, leading to changes in the way in
which peering between providers is organized and the way in which
transit traffic is routed.
- Requirements for traffic engineering within and between domains,
including coping with multiple paths between domains.
- Addition of a second IP addressing technique, in the form of IPv6.
- The use of VPNs and private address space with IPv4 and IPv6.
- Evolution of the end-to-end principle to deal with the expanded
role of the Internet, as discussed in [Blumenthal01]: this paper
discusses the possibility that the range of new requirements,
especially the social and techno-political ones that are being
placed on the future, may compromise the Internet's original
design principles. This might cause the Internet to lose some of
its key features, in particular, its ability to support new and
unanticipated applications. This discussion is linked to the rise
of new stakeholders in the Internet, especially ISPs; new
government interests; the changing motivations of the ever growing
user base; and the tension between the demand for trustworthy
overall operation and the inability to trust the behavior of
individual users.
- Incorporation of alternative forwarding techniques such as the
explicit routing (pipes) supplied by the MPLS [RFC3031] and GMPLS
[RFC3471] protocols.
- Integration of additional constraints into route determination
from interactions with other layers (e.g., Shared Risk Link Groups
[InferenceSRLG]). This includes the concern that redundant routes
should not fate-share, e.g., because they physically run in the same
conduit.
- Support for alternative and multiple routing techniques that are
better suited to delivering types of content organized in ways
other than into IP-addressed packets.
Philosophically, the Internet has the mission of transferring
information from one place to another. Conceptually, this
information is rarely organized into conveniently sized, IP-addressed
packets, and the FDR needs to consider how the information (content)
to be carried is identified, named, and addressed. Routing
techniques can then be adapted to handle the expected types of
content.
3.2.3. High-Level Goals
This section attempts to answer two questions:
- What are we trying to achieve in a new architecture?
- Why should the Internet community care?
There is a third question that needs to be answered as well, but that
has seldom been explicitly discussed:
- How will we know when we have succeeded?
3.2.3.1. Providing a Routing System Matched to Domain Organization
Many of today's routing problems are caused by a routing system that
is not well matched to the organization and policies that it is
trying to support. Our goal is to develop a routing architecture
where even a domain organization that is not envisioned today can be
served by a routing architecture that matches its requirements. We
will know when this goal is achieved when the desired policies,
rules, and organization can be mapped into the routing system in a
natural, consistent, and easily understood way.
3.2.3.2. Supporting a Range of Different Communication Services
Today's routing protocols only support a single data forwarding
service that is typically used to deliver a best-effort service in
the public Internet. On the other hand, Diffserv, for example, can
construct a number of different bit transport services within the
network. Using some of the per-domain behaviors (PDBs) that have
been discussed in the IETF, it is possible to construct services such
as Virtual Wire [DiffservVW] and Assured Rate [DiffservAR].
Providers today offer rudimentary promises about traffic handling in
the network, for example, delay and long-term packet loss guarantees.
As time goes on, this becomes even more relevant. Communicating the
service characteristics of paths in routing protocols will be
necessary in the near future, and it will be necessary to be able to
route packets according to their service requirements.
Thus, a goal of this architecture is to allow adequate information
about path service characteristics to be passed between domains and
consequently, to allow the delivery of bit transport services other
than the best-effort datagram connectivity service that is the
current common denominator.
3.2.3.3. Scalable Well Beyond Current Predictable Needs
Any proposed FDR system should scale beyond the size and performance
we can foresee for the next ten years. The previous IDR proposal, as
implemented by BGP, has, with some massaging, held up for over ten
years. In that time the Internet has grown far beyond the
predictions that were implied by the original requirements.
Unfortunately, we will only know if we have succeeded in this goal if
the FDR system survives beyond its design lifetime without serious
massaging. Failure will be much easier to spot!
3.2.3.4. Alternative Forwarding Mechanisms
With the advent of circuit-based technologies (e.g., MPLS [RFC3031]
and GMPLS [RFC3471]) managed by IP routers there are forwarding
mechanisms other than the datagram service that need to be supported
by the routing architecture.
An explicit goal of this architecture is to add support for
forwarding mechanisms other than the current hop-by-hop datagram
forwarding service driven by globally unique IP addresses.
3.2.3.5. Separation of Topology Map from Connectivity Service
It is envisioned that an organization can support multiple services
within a single network. These services can, for example, be of
different quality, of different connectivity type, or of different
protocols (e.g., IPv4 and IPv6). For all these services, there may
be common domain topology, even though the policies controlling the
routing of information might differ from service to service. Thus, a
goal with this architecture is to support separation between creation
of a domain (or organization) topology map and service creation.
3.2.3.6. Separation between Routing and Forwarding
The architecture of a router is composed of two main separable parts:
control and forwarding. These components, while inter-dependent,
perform functions that are largely independent of each other.
Control (routing, signaling, and management) is typically done in
software while forwarding typically is done with specialized ASICs
or network processors.
The nature of an IP-based network today is that control and data
protocols share the same network and forwarding regime. This may not
always be the case in future networks, and we should be careful to
avoid building in this sharing as an assumption in the FDR.
A goal of this architecture is to support full separation of control
and forwarding, and to consider what additional concerns might be
properly considered separately (e.g., adjacency management).
3.2.3.7. Different Routing Paradigms in Different Areas of the Same
Network
A number of routing paradigms have been used or researched, in
addition to the conventional shortest-path-by-hop-count paradigm that
is the current mainstay of the Internet. In particular, differences
in underlying transport networks may mean that other kinds of routing
are more relevant, and the perceived need for traffic engineering
will certainly alter the routing chosen in various domains.
Explicitly, one of these routing paradigms should be the current
routing paradigm, so that the new paradigms will inter-operate in a
backward-compatible way with today's system. This will facilitate a
migration strategy that avoids flag days.
3.2.3.8. Protection against Denial-of-Service and Other Security
Attacks
Currently, existence of a route to a destination effectively implies
that anybody who can get a packet onto the network is entitled to use
that route. While there are limitations to this generalization, this
is a clear invitation to denial-of-service attacks. A goal of the
FDR system should be to allow traffic to be specifically linked to
whole or partial routes so that a destination or link resources can
be protected from unauthorized use.
Editors' Note: When sections like this one and the previous ones
on quality differentiation were written, the idea of separating
traffic for security or quality was considered an unqualified
advantage. Today, however, in the midst of active discussions on
Network Neutrality, it is clear that such issues have a crucial
policy component that also needs to be understood. These, and
other similar issues, are open to further research.
3.2.3.9. Provable Convergence with Verifiable Policy Interaction
It has been shown both analytically, by Griffin, et al. (see
[Griffin99]), and practically (see [RFC3345]) that BGP will not
converge stably or is only meta-stable (i.e., will not re-converge in
the face of a single failure) when certain types of policy constraint
are applied to categories of network topology. The addition of
policy to the basic distance-vector algorithm invalidates the proofs
of convergence that could be applied to a policy-free implementation.
It has also been argued that global convergence may no longer be a
necessary goal and that local convergence may be all that is
required.
A goal of the FDR should be to achieve provable convergence of the
protocols used; this may involve constraining the topologies and
domains subject to convergence. This will also require vetting the
policies imposed to ensure that they are compatible across domain
boundaries and result in a consistent policy set.
Editors' Note: This requirement is very optimistic in that it
implies that it is possible to get operators to cooperate even if it
is seen by them to be against their business practices. Though
perhaps Utopian, this is a good goal.
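The Griffin-style non-convergence result cited above can be
illustrated with the classic "bad gadget" topology: three nodes
around a destination 0, each preferring the route through its
clockwise neighbor over its direct route. A brute-force stability
check (an illustrative sketch, not Griffin's formal machinery) shows
that no stable route assignment exists:

```python
from itertools import product

# Each node ranks its permitted paths to destination 0, most preferred
# first; the direct path is least preferred.
prefs = {
    1: [(1, 2, 0), (1, 0)],
    2: [(2, 3, 0), (2, 0)],
    3: [(3, 1, 0), (3, 0)],
}

def best(node, assignment):
    # Best permitted path consistent with what neighbors currently use.
    for path in prefs[node]:
        if len(path) == 2:                 # direct path, always available
            return path
        if assignment[path[1]] == path[1:]:  # neighbor uses the tail
            return path
    return None

def stable_assignments():
    # An assignment is stable when every node's path is its best response.
    out = []
    for combo in product(*(prefs[n] for n in (1, 2, 3))):
        a = dict(zip((1, 2, 3), combo))
        if all(best(n, a) == a[n] for n in (1, 2, 3)):
            out.append(a)
    return out

print(stable_assignments())  # [] -- no stable solution exists
```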
3.2.3.10. Robustness Despite Errors and Failures
From time to time in the history of the Internet, there have been
occurrences where misconfigured routers have destroyed global
connectivity.
A goal of the FDR is to be more robust to configuration errors and
failures. This should probably involve ensuring that the effects of
misconfiguration and failure can be confined to some suitable
locality of the failure or misconfiguration.
3.2.3.11. Simplicity in Management
The policy work ([rap-charter02], [snmpconf-charter02], and
[policy-charter02]) that has been done at the IETF provides an
architecture that standardizes and simplifies management of QoS.
This kind of simplicity is needed in a Future Domain Routing
architecture and its protocols.
A goal of this architecture is to make configuration and management
of inter-domain routing as simple as possible.
Editors' Note: Snmpconf and rap are the hopes of the past. Today,
configuration and policy hope is focused on netconf.
3.2.3.12. The Legacy of RFC 1126
RFC 1126 outlined a set of requirements that were used to guide the
development of BGP. While the network has changed in the years since
1989, many of the same requirements remain. A future domain routing
solution has to support, as its base requirement, the level of
function that is available today. A detailed discussion of RFC 1126
and its requirements can be found in [RFC5773]. Those requirements,
while specifically spelled out in that document, are subsumed by the
requirements in this document.
3.3. High-Level User Requirements
This section considers the requirements imposed by the target
audience of the FDR both in terms of organizations that might own
networks that would use FDR, and the human users who will have to
interact with the FDR.
3.3.1. Organizational Users
The organizations that own networks connected to the Internet have
become much more diverse since RFC 1126 [RFC1126] was published. In
particular, major parts of the network are now owned by commercial
service provider organizations in the business of making profits from
carrying data traffic.
3.3.1.1. Commercial Service Providers
The routing system must take into account the commercial service
provider's need for secrecy and security, as well as allowing them to
organize their business as flexibly as possible.
Service providers will often wish to conceal the details of the
network from other connected networks. So far as is possible, the
routing system should not require the service providers to expose
more details of the topology and capability of their networks than
is necessary.
Many service providers will offer contracts to their customers in the
form of Service Level Agreements (SLAs). The routing system must
allow the providers to support these SLAs through traffic engineering
and load balancing as well as multi-homing, providing the degree of
resilience and robustness that is needed.
Service providers can be categorized as:
- Global Service Providers (GSPs) whose networks have a global
reach. GSPs may, and usually will, wish to constrain traffic
between their customers to run entirely on their networks. GSPs
will interchange traffic at multiple peering points with other
GSPs, and they will need extensive policy-based controls to
control the interchange of traffic. Peering may be through the
use of dedicated private lines between the partners or,
increasingly, through Internet Exchange Points.
- National, or regional, Service Providers (NSPs) that are similar
to GSPs but typically cover one country. NSPs may operate as a
federation that provides similar reach to a GSP and may wish to be
able to steer traffic preferentially to other federation members
to achieve global reach.
- Local Internet Service Providers (ISPs) operate regionally. They
will typically purchase transit capacity from NSPs or GSPs to
provide global connectivity, but they may also peer with
neighboring, and sometimes distant, ISPs.
The routing system should be sufficiently flexible to accommodate the
continually changing business relationships of the providers and the
various levels of trustworthiness that they apply to customers and
peers.
Service providers will need to be involved in accounting for Internet
usage and monitoring the traffic. They may be involved in government
action to tax the usage of the Internet, enforce social mores and
intellectual property rules, or apply surveillance to the traffic to
detect or prevent crime.
The leaves of the network domain graph are in many cases networks
supporting a single enterprise. Such networks cover an enormous
range of complexity. Some multi-national companies own networks that
rival the complexity and reach of a GSP, whereas many fall into the
Small Office-Home Office (SOHO) category. The routing system should
allow simple and robust configuration and operation for the SOHO
category, while effectively supporting the larger enterprise.
Enterprises are particularly likely to lack the capability to
configure and manage a complex routing system, and every effort
should be made to provide simple configuration and operation for
such networks.
Enterprises will also need to be able to change their service
provider with ease. While this is predominantly a naming and
addressing issue, the routing system must be able to support seamless
changeover; for example, if the changeover requires a change of
address prefix, the routing system must be able to cope with a period
when both sets of addresses are in use.
Enterprises will wish to be able to multi-home to one or more
providers as one possible means of enhancing the resilience of their
connectivity.
Enterprises will also frequently need to control the trust that they
place both in workers and external connections through firewalls and
similar middle-boxes placed at their external connections.
3.3.1.2. Domestic Networks
Increasingly, domestic (i.e., non-business, home) networks are likely
to be 'always on' and will resemble SOHO enterprise networks with no
special requirements on the routing system.
The routing system must also continue to support dial-up users.
3.3.1.3. Internet Exchange Points
Peering of service providers, academic networks, and larger
enterprises is happening increasingly at specific Internet Exchange
Points where many networks are linked together in a relatively small
physical area. The resources of the exchange may be owned by a
trusted third party or owned jointly by the connecting networks. The
routing system should support such exchange points without requiring
the exchange point either to operate as a superior entity, with every
connected network logically inferior to it, or to be a member of one
(or all) of the connected networks.
The connecting networks have to delegate a certain amount of trust to
the exchange point operator.
3.3.1.4. Content Providers
Content providers are at one level a special class of enterprise, but
the desire to deliver content efficiently means that a content
provider may provide multiple replicated origin servers or caches
across a network. These may also be provided by a separate content
delivery service. The routing system should facilitate delivering
content from the most efficient location.
3.3.2. Individual Users
This section covers the most important human users of the FDR and
their expected interactions with the system.
3.3.2.1. All End Users
The routing system must continue to deliver the current global
connectivity service (i.e., any unique address to any other unique
address, subject to policy constraints) that has always been the
basic aim of the Internet.
End user applications should be able to request, or have requested on
their behalf by agents and policy mechanisms, end-to-end
communication services with QoS characteristics different from the
best-effort service that is the foundation of today's Internet. It
should be possible to request both a single service channel and a
bundle of service channels delivered as a single entity.
3.3.2.2. Network Planners
The routing system should allow network planners to plan and
implement a network that can be proved to be stable and will meet
their traffic engineering requirements.
3.3.2.3. Network Operators
The routing system should, so far as is possible, be simple to
configure, operate and troubleshoot, behave in a predictable and
stable fashion, and deliver appropriate statistics and events to
allow the network to be managed and upgraded in an efficient and
effective manner.
3.3.2.4. Mobile End Users
The routing system must support mobile end users. It is clear that
mobility is becoming a predominant mode for network access.
3.4. Mandated Constraints
While many of the requirements to which the protocol must respond are
technical, some are not. These mandated constraints are those that
are determined by conditions of the world around us. Understanding
these requirements requires an analysis of the world in which these
systems will be deployed. The constraints include those derived
from:
- environmental factors,
- political boundaries and considerations, and
- technological factors such as the prevalence of different levels
of technology in the developed world compared to those in the
developing or undeveloped world.
3.4.1. The Federated Environment
The graph of the Internet network, with routers and other control
boxes as the nodes and communication links as the edges, is today
partitioned administratively into a large number of disjoint domains.
A common administration may have responsibility for one or more
domains that may or may not be adjacent in the graph.
Commercial and policy constraints affecting the routing system will
typically be exercised at the boundaries of these domains where
traffic is exchanged between the domains.
The perceived need for commercial confidentiality will seek to
minimize the control information transferred across these boundaries,
leading to requirements for aggregated information, abstracted maps
of connectivity exported from domains, and mistrust of supplied
information.
The perceived desire for anonymity may require the use of zero-
knowledge security protocols to allow users to access resources
without exposing their identity.
The requirements should provide the ability for groups of peering
domains to be treated as a complex domain. These complex domains
could have a common administrative policy.
3.4.2. Working with Different Sorts of Networks
The diverse Layer 2 networks, over which the Layer 3 routing system
is implemented, have typically been operated totally independently
from the Layer 3 network and often with their own routing mechanisms.
Consideration needs to be given to the desirable degree and nature of
interchange of information between the layers. In particular, the
need for guaranteed robustness through diverse routing layers implies
knowledge of the underlying networks.
Mobile access networks may also impose extra requirements on Layer 3
routing.
3.4.3. Delivering Resilient Service
The routing system operates at Layer 3 in the network. Achieving
robustness and resilience at this layer requires that, where multiple
diverse routes are employed as part of delivering the resilience, the
routing system at Layer 3 be assured that the Layer 2 and
lower routes are really diverse. The "diamond problem" is the
simplest form of this problem -- a Layer 3 provider attempting to
provide diversity buys Layer 2 services from two separate providers
who in turn buy Layer 1 services from the same provider:
                          Layer 3 service
                           /           \
                    Layer 2             Layer 2
                   Provider A          Provider B
                           \           /
                         Layer 1 Provider
Now, when the backhoe cuts the trench, the Layer 3 provider has no
resilience unless it has taken special steps to verify that the
trench wasn't common. The routing system should facilitate avoidance
of this kind of trap.
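A check of the kind suggested above can be sketched with Shared Risk
Link Groups: two paths are diverse only if their underlying resources
share no SRLG. The provider names and SRLG identifiers below are
invented for illustration:

```python
# Two paths fate-share exactly when the union of the SRLGs of their
# component links intersects.
def shared_risks(path_a, path_b, srlg_of):
    risks = lambda path: set().union(*(srlg_of[link] for link in path))
    return risks(path_a) & risks(path_b)

# Both Layer 2 providers unknowingly ride the same Layer 1 trench.
srlg_of = {
    "L2-provider-A": {"srlg-7"},
    "L2-provider-B": {"srlg-7"},
    "L2-provider-C": {"srlg-9"},
}
print(shared_risks(["L2-provider-A"], ["L2-provider-B"], srlg_of))  # {'srlg-7'}
print(shared_risks(["L2-provider-A"], ["L2-provider-C"], srlg_of))  # set()
```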
Some work is going on to understand the sort of problems that stem
from this requirement, such as the work on Shared Risk Link Groups
[InferenceSRLG]. Unfortunately, the full generality of the problem
requires that diversity be maintained over time between an
arbitrarily large set of mutually distrustful providers. For some
cases, it may be sufficient for diversity to be checked at
provisioning or route instantiation time, but this remains a hard
problem requiring further study.
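The shared-risk check underlying work such as [InferenceSRLG] can be
sketched in a few lines: given a mapping from each link to the set of
risk groups it belongs to, two paths are genuinely diverse only if
their risk sets do not intersect. The link names and SRLG numbers
below are invented purely for illustration.

```python
# Sketch: checking that two candidate paths are SRLG-disjoint.
# Link identifiers and SRLG numbers are invented for illustration;
# real SRLG data would have to come from the underlying providers.

def shared_risk_groups(path_a, path_b, srlg_of):
    """Return the set of SRLGs common to both paths.

    path_a, path_b : sequences of link identifiers
    srlg_of        : mapping from link identifier to a set of SRLG ids
    """
    risks_a = set().union(*(srlg_of[link] for link in path_a))
    risks_b = set().union(*(srlg_of[link] for link in path_b))
    return risks_a & risks_b

# The "diamond problem": both Layer 2 providers unknowingly ride the
# same Layer 1 trench, modelled here as SRLG 7.
srlg_of = {
    "A-link1": {1}, "A-link2": {1, 7},   # provider A; second hop in the trench
    "B-link1": {2}, "B-link2": {2, 7},   # provider B; second hop in the trench
}
common = shared_risk_groups(["A-link1", "A-link2"],
                            ["B-link1", "B-link2"], srlg_of)
print(common)  # {7} -- the two paths are not really diverse
```

A non-empty result flags exactly the trap described above: the paths
look diverse at Layer 2 but share fate at Layer 1.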
3.4.4. When Will the New Solution Be Required?
There is a full range of opinion on this subject. An informal survey
indicates that the range varies from 2 to 6 years. And while there
are those, possibly outliers, who think there is no need for a new
routing architecture as well as those who think a new architecture
was needed years ago, the median seems to lie at around 4 years. As
in all projections of the future, this is not provable at this time.
Editors' Note: The paragraph above was written in 2002, yet could
be written without change in 2006. As with many technical
predictions and schedules, the horizon has remained fixed as time
passes.
3.5. Assumptions
In projecting the requirements for the Future Domain Routing, a
number of assumptions have been made. The requirements set out
should be consistent with these assumptions, but there are doubtless
a number of other assumptions that are not explicitly articulated
here.
1. The number of hosts today is somewhere in the area of 100
million. With dial-in, NATs, and the universal deployment of
IPv6, this is likely to grow to as many as 500 million users (see
[CIDR]). In a number of years, with wireless access and
different appliances attaching to the Internet, we are likely to
see a couple of billion (10^9) "users" on the Internet. The
number of globally addressable hosts depends very much on how
common NATs will be in the future.
2. NATs, firewalls, and other middle-boxes exist, and we cannot
assume that they will cease being a presence in the networks.
3. The number of operators in the Internet will probably not grow
very much, as there is a likelihood that operators will tend to
merge. However, as Internet-connectivity expands to new
countries, new operators will emerge and then merge again.
4. At the beginning of 2002, there are around 12000 registered ASs.
With current use of ASs (e.g., multi-homing) the number of ASs
could be expected to grow to 25000 in about 10 years [Broido02].
This is down from a previously reported growth rate of 51% per
year [RFC3221]. Future growth rates are difficult to predict.
Editors' Note: In the routing report table of August 2006,
the total number of ASs present in the Internet Routing Table
was 23000. In 4 years, this is substantial progress on the
prediction of 25000 ASs. Also, there are significantly more
ASs registered than are visibly active, i.e., in excess of
42000 in mid-2006. It is possible, however, that many are
being used internally.
5. In contrast to the number of operators, the number of domains is
likely to grow significantly. Today, each operator has
different domains within an AS; this also shows up in SLAs and
policies internal to the operator. Making these domains globally
visible would create a large number of domains: 10-100 times the
number of ASs, i.e., between 100,000 and 1,000,000.
6. With more and more capacity at the edge of the network, the IP
network will expand. Today, there are operators with several
thousand routers, but this number is likely to increase. Some
domains will probably contain tens of thousands of routers.
7. The speed of connections in the (fixed) access network will
technically be (almost) unconstrained. However, the cost of the
links will not be negligible, so the apparent speed will be
effectively bounded. Within a number of years, some users will
have multi-gigabit speed in the access.
8. At the same time, the bandwidth of wireless access still has a
strict upper bound. Within the foreseeable future, each user
will have only a tiny amount of resources available compared
with fixed access: 10 kbps to 2 Mbps for a Universal Mobile
Telecommunications System (UMTS), with only a few users
achieving the higher figure, since the bandwidth is shared
between the active users in a cell and only small cells can
actually reach this speed; but 11 Mbps or more for wireless LAN
connections. There may also be requirements for effective use
of bandwidth as low as 2.4 kbps, or even lower, in some
applications.
9. Assumptions 7 and 8 taken together suggest a minimum span of
bandwidth from 2.4 kbps to 10 Gbps.
10. The speed in the backbone has grown rapidly, and there is no
evidence that this growth will stop in the coming years.
Terabit speed is likely to be the minimum backbone speed in a
couple of years. The range of bandwidths that needs to be
represented will require consideration of how to represent the
values in the protocols.
11. There have been discussions as to whether Moore's Law will
continue to hold for processor speed. If Moore's Law does not
hold, then communication circuits might play a more important
role in the future. Also, optical routing is based on circuit
technology, which is the main reason for taking "circuits" into
account when designing an FDR.
12. However, the datagram model still remains the fundamental model
for the Internet.
13. The number of peering points in the network is likely to grow,
as multi-homing becomes important. Also, traffic will become
more locally distributed, which will drive the demand for local
peering.
Editors' Note: On the other hand, peer-to-peer networking may
shift the balance in demand for local peering.
14. The FDR will achieve the same degree of ubiquity as the current
Internet and IP routing.
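Assumptions 7 through 10 imply that a single protocol field may have
to represent bandwidths spanning some ten decades, from 2.4 kbps into
the terabit range. One plausible approach, sketched below with
invented parameters (a 1 kbps base and 24 steps per decade), is a
logarithmic one-octet encoding; existing TE extensions to the IGPs
instead carry bandwidth as a 32-bit IEEE floating-point value, which
covers the same range at the cost of three more octets.

```python
# Sketch: representing a 2.4 kbps .. multi-Tbps bandwidth span in one
# octet on a logarithmic scale.  The base value and step size are
# invented for illustration; a real protocol would choose them to
# match its precision requirements.

import math

BASE_BPS = 1_000          # code 0 encodes 1 kbps
STEPS_PER_DECADE = 24     # 24 steps per factor of 10 -> ~10% granularity

def encode(bps):
    """Map a bandwidth in bit/s to an unsigned 8-bit code."""
    code = round(STEPS_PER_DECADE * math.log10(bps / BASE_BPS))
    return max(0, min(255, code))

def decode(code):
    """Approximate bandwidth in bit/s represented by a code."""
    return BASE_BPS * 10 ** (code / STEPS_PER_DECADE)

# 255 / 24 = 10.6 decades above 1 kbps, so codes reach roughly 4x10^13
# bit/s, comfortably covering the 2.4 kbps .. 10 Gbps span above.
for bw in (2_400, 2_000_000, 10_000_000_000):
    c = encode(bw)
    print(bw, c, round(decode(c)))
```

The ~10% quantization error per step is invisible next to normal
traffic variation, which is why log-scale (or floating-point)
encodings are the usual answer to wide dynamic ranges in protocols.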