
RFC 7754


Technical Considerations for Internet Service Blocking and Filtering

Part 2 of 2, p. 11 to 33


4.  Evaluation of Blocking Design Patterns

4.1.  Criteria for Evaluation

   To evaluate the technical implications of each of the blocking design
   patterns, we compare them based on four criteria: scope, granularity,
   efficacy, and security.

4.1.1.  Scope: What set of hosts and users are affected?

   The Internet is composed of many distinct autonomous networks and
   applications, which means that the impact of a blocking system will
   only be within a defined topological scope.  For example, blocking
   within an access network will only affect a well-defined set of
   users (namely, those connected to the access network).  Blocking
   performed by an application provider, by contrast, can affect that
   application's users across the entire Internet.

   Blocking systems are generally viewed as less objectionable if the
   scope of their impact is as narrow as possible while still being
   effective, and as long as the impact of the blocking falls within
   the administrative realm of the policy-setting entity.  As mentioned
   previously, enterprise blocking systems are commonly deployed and
   generally affect only enterprise users.  However, design flaws in
   blocking systems may cause the effects of blocking to be overbroad.
   For example, at least one service provider blocking content in
   accordance with a regulation ended up blocking content for
   downstream service providers because it filtered routes to
   particular systems and did not distribute the original routing
   information to downstream service providers in other jurisdictions
   [IN-OM-filtering].  Other service providers have accidentally leaked
   such black hole routes beyond their jurisdictions [NW08].  A
   substantial amount of work has gone into BGP security to prevent
   such incidents, but deployment of those mechanisms lags.

4.1.2.  Granularity: How specific is the blocking?  Will blocking one
        service also block others?

   Internet applications are built out of a collection of loosely
   coupled components or "layers".  Different layers serve different
   purposes and rely on or offer different functions such as routing,
   transport, and naming (see [RFC1122], especially Section 1.1.3).  The
   functions at these layers are developed autonomously and almost
   always operated by different parties.  For example, in many networks,
   physical and link-layer connectivity is provided by an "access
   provider", IP routing is performed by an "Internet service provider,"
   and application-layer services are provided by completely separate
   entities (e.g., web servers).  Upper-layer protocols and applications
   rely on combinations of lower-layer functions in order to work.
   Functionality at higher layers tends to be more specialized, so that
   many different specialized applications can make use of the same
   generic underlying network functions.

   As a result of this structure, actions taken at one layer can affect
   functionality or applications at other layers.  For example,
   manipulating routing or naming functions to restrict access to a
   narrow set of resources via specific applications will likely affect
   all applications that depend on those functions.  As with the scope
   criterion, blocking systems are generally viewed as less
   objectionable when they are highly granular and do not cause
   collateral damage to content or services unrelated to the target of
   the blocking.

   Even within the application layer, the granularity of blocking can
   vary depending on how targeted the blocking system is designed to be.
   Blocking all traffic associated with a particular application
   protocol is less granular than blocking only traffic associated with
   a subset of application instances that make use of that protocol.
   Sophisticated heuristics that make use of information about the
   application protocol, lower-layer protocols, payload signatures,
   source and destination addresses, inter-packet timing, packet sizes,
   and other characteristics are sometimes used to narrow the subset of
   traffic to be blocked.
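
   As an illustration, the heuristic combination of attributes
   described above can be sketched as a simple scoring function.  The
   protocol chosen (SIP, whose well-known ports are 5060/5061), the
   payload signature, and every threshold below are invented for the
   example and are not drawn from any deployed classifier.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A few of the per-flow attributes such heuristics draw on."""
    dst_port: int
    payload_prefix: bytes         # first bytes of application payload
    mean_packet_size: float       # bytes
    mean_inter_packet_gap: float  # seconds

def looks_like_target_protocol(flow: Flow) -> bool:
    """Score several weak signals; no single attribute decides alone."""
    score = 0
    if flow.dst_port in (5060, 5061):                   # well-known SIP ports
        score += 2
    if flow.payload_prefix.startswith(b"INVITE sip:"):  # SIP request line
        score += 3
    if flow.mean_packet_size < 300:                     # small, chatty packets
        score += 1
    if flow.mean_inter_packet_gap < 0.05:               # tightly spaced
        score += 1
    return score >= 4
```

   A real classifier would weigh many more signals (and tolerate
   missing ones), but the structure -- several weak indicators combined
   into one blocking decision -- is the same.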

4.1.3.  Efficacy: How easy is it for a resource or service to avoid
        being blocked?

   Although blocking a resource or service might have some immediate
   effect, efficacy must be evaluated in terms of whether the blocking
   is easy to circumvent.  A one-time blocking action is often unlikely
   to have lasting efficacy (e.g., see [CleanFeed] and [BlackLists14]).
   Experience has shown that, in general, blacklisting requires
   continual maintenance of the blacklist itself, both to add new
   entries for unwanted traffic and to delete entries when offending
   content is removed.  Experience also shows that, depending on the
   nature of the block, it may be difficult to determine when to
   unblock.  For instance, if a host is blocked because it has been
   compromised and used as a source of attack, it may not be plainly
   evident when that host has been fixed.

   For blacklist-style blocking, the distributed and mobile nature of
   Internet resources limits the effectiveness of blocking actions.  A
   service that is blocked in one jurisdiction can often be moved or re-
   instantiated in another jurisdiction (see, for example,
   [Malicious-Resolution]).  Likewise, services that rely on blocked
   resources can often be rapidly reconfigured to use non-blocked
   resources.  If a web site is prevented from using a domain name or
   set of IP addresses, the content can simply be moved to another
   domain name or network, or use alternate syntaxes to express the same
   resource name (see the discussion of false negatives in [RFC6943]).
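
   The false-negative problem can be seen in a few lines: a filter that
   matches only the exact blocked host name misses equivalent spellings
   of the same resource.  The host names and address below are purely
   illustrative.

```python
from urllib.parse import urlsplit

BLOCKED_HOSTS = {"forbidden.example"}

def naive_block(url: str) -> bool:
    """Block on an exact host-name match, as a simplistic filter might."""
    return urlsplit(url).hostname in BLOCKED_HOSTS

# The canonical spelling is caught ...
assert naive_block("http://forbidden.example/page")

# ... but equivalent spellings of the same resource slip through:
assert not naive_block("http://forbidden.example./page")  # trailing root dot
assert not naive_block("http://192.0.2.10/page")          # the host's address
                                                          # as an IP literal
```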

   In a process known as "snowshoe spamming," a spam originator uses
   addresses in many different networks as sources for spam.  This
   technique is already widely used to spread spam generation across a
   variety of resources and jurisdictions to prevent spam blocking from
   being effective.

   In the presence of either blacklist or whitelist systems, there are
   several ways in which a user or application can try to circumvent
   the blocking.

   Users may choose different sets of protocols or otherwise alter
   their traffic characteristics to circumvent the filters.  In some
   cases, applications may shift their traffic to port 80 or 443 when
   other ports are blocked.  Services may also be tunneled within other
   services, proxied by a collaborating external host (e.g., an
   anonymous redirector), or simply run over an alternate port (e.g.,
   port 8080 instead of port 80 for HTTP).  Another means of
   circumvention is altering the service to negotiate ports
   dynamically, avoiding the use of a constant, well-known port.

   One of the primary motivations for arguing that HTTP/2 should be
   encrypted by default was that unencrypted HTTP/1.1 traffic was
   sometimes blocked or improperly processed.  Users or applications
   shifting their traffic to encrypted HTTP has the effect of
   circumventing filters that depend on the HTTP plaintext payload.

   If voice communication based on SIP [RFC3261] is blocked, users are
   likely to switch to applications that use proprietary protocols to
   talk to each other.

   Some filtering systems are only capable of identifying IPv4 traffic
   and therefore, by shifting to IPv6, users may be able to evade
   filtering.  Using IPv6 with header options, using multiple layers of
   tunnels, or using encrypted tunnels can also make it more challenging
   for blocking systems to find transport ports within packets, making
   port-based blocking more difficult.  Thus, distribution and mobility
   can hamper efforts to block communications in a number of ways.

4.1.4.  Security: How does the blocking impact existing trust
        infrastructures?

   Modern security mechanisms rely on trusted hosts communicating via a
   secure channel without intermediary interference.  Protocols such as
   Transport Layer Security (TLS) [RFC5246] and IPsec [RFC4301] are
   designed to ensure that each endpoint of the communication knows the
   identity of the other endpoint(s) and that only the endpoints of the
   communication can access the secured contents of the communication.
   For example, when a user connects to a bank's web site, TLS ensures
   that the user's banking information is securely communicated to the
   bank and nobody else, ensuring that the data remains confidential
   while in transit.

   Some blocking strategies require intermediaries to insert themselves
   within the end-to-end communications path, potentially breaking
   security properties of Internet protocols [RFC4924].  In these cases,
   it can be difficult or impossible for endpoints to distinguish
   between attackers and "authorized" parties conducting blocking.  For
   example, an enterprise firewall administrator could gain access to
   users' personal bank accounts when users on the enterprise network
   connect to bank web sites.

   Finally, one needs to evaluate whether a blocking mechanism can be
   used by an end user to efficiently locate blocked resources that can
   then be accessed via other mechanisms that circumvent the blocking
   mechanism.  For example, Clayton [CleanFeed] showed how special
   treatment in one blocking system could be detected by end users in
   order to efficiently locate illegal web sites, which was thus
   counterproductive to the policy objective of the blocking mechanism.

4.2.  Network-Based Blocking

   Being able to block access to resources without the consent or
   cooperation of either endpoint is viewed as a desirable feature by
   some that deploy blocking systems.  Systems that have this property
   are often implemented using intermediary devices in the network, such
   as firewalls or filtering systems.  These systems inspect traffic as
   it passes through the network, decide based on the characteristics or
   content of a given communication whether it should be blocked, and
   then block or allow the communication as desired.  For example, web
   filtering devices usually inspect HTTP requests to determine the URI
   being requested, compare that URI to a list of blacklisted or
   whitelisted URIs, and allow the request to proceed only if it is
   permitted by policy.  Firewalls perform a similar function for other
   classes of traffic in addition to HTTP.  Some blocking systems focus
   on specific application-layer traffic, while others, such as router
   Access Control Lists (ACLs), filter traffic based on lower-layer
   criteria (transport protocol and source or destination addresses or
   ports).

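   A URI-based web filter of the kind just described can be sketched
   roughly as follows.  The list contents and the proxy-style request
   line are invented for the example; a real device would also have to
   reassemble the request from packets before this check could run.

```python
from urllib.parse import urlsplit

BLACKLIST = {("blocked.example", "/bad/page")}

def allow_request(request_line: str) -> bool:
    """Decide whether an HTTP request may proceed, from its request line.

    Expects a proxy-style absolute URI, e.g.:
        'GET http://blocked.example/bad/page HTTP/1.1'
    """
    _method, uri, _version = request_line.split(" ")
    parts = urlsplit(uri)
    return (parts.hostname, parts.path) not in BLACKLIST
```

   A whitelist variant simply inverts the membership test against a
   list of permitted URIs.
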
   Intermediary systems used for blocking are often not far from the
   edge of the network.  For example, many enterprise networks operate
   firewalls that block certain web sites, as do some residential ISPs.
   In some cases, this filtering is done with the consent or cooperation
   of the affected endpoints.  PCs within an enterprise, for example,
   might be configured to trust an enterprise proxy, a residential ISP
   might offer a "safe browsing" service, or mail clients might

   authorize mail servers on the local network to filter spam on their
   behalf.  These cases share some of the properties of the "Endpoint-
   Based Blocking" scenarios discussed in Section 4.4 below, since the
   endpoint has made an informed decision to authorize the intermediary
   to block on its behalf and is therefore unlikely to attempt to
   circumvent the blocking.  From an architectural perspective, however,
   they may create many of the same problems as network-based filtering
   conducted without consent.

4.2.1.  Scope

   In the case of government-initiated blocking, network operators
   subject to a specific jurisdiction may be required to block or
   filter.  Thus, it is possible for laws to be structured to result in
   blocking by imposing obligations on the operators of networks within
   a jurisdiction, either via direct government action or by allowing
   private actors to demand blocking (e.g., through lawsuits).

   Regardless of who is responsible for a blocking policy, enforcement
   can be done using Stateless Packet Filtering, Stateful Packet
   Filtering, or Deep Packet Inspection as defined in Section 2.  While
   network-based Stateless Packet Filtering has granularity issues
   discussed in Section 4.2.2, network-based Stateful Packet Filtering
   and Deep Packet Inspection approaches often run into several
   technical issues that limit their viability in practice.  For
   example, many issues arise from the fact that an intermediary needs
   to have access to a sufficient amount of traffic to make its
   blocking decisions.

   For residential or consumer networks with many egress points, the
   first step to obtaining this traffic is simply gaining access to the
   constituent packets.  The Internet is designed to deliver packets
   independently from source to destination -- not to any particular
   point along the way.  Thus, the sequence of packets from the sender
   can only be reliably reconstructed at the intended receiver.  In
   addition, inter-network routing is often asymmetric, and for
   sufficiently complex local networks, intra-network traffic flows can
   be asymmetric as well [asymmetry].  Thus, packets in the reverse
   direction use a different set of paths than the forward direction.

   This asymmetry means that an intermediary in a network with many
   egress points may, depending on topology and configuration, see only
   one half of a given communication, which may limit the scope of the
   communications that it can filter.  For example, a filter aimed at
   requests destined for particular URIs cannot make accurate blocking
   decisions based on the URI if it is only in the data path for HTTP
   responses and not requests, since the URI is not included in the
   responses.  Asymmetry may be surmountable given a filtering system
   with enough distributed, interconnected filtering nodes that can
   coordinate information about flows belonging to the same
   communication or transaction, but depending on the size of the
   network this may imply significant complexity in the filtering
   system.  Routing can sometimes be forced to be symmetric within a
   given network using routing configuration, NAT, or Layer 2 mechanisms
   (e.g., MPLS), but these mechanisms are frequently brittle, complex,
   and costly -- and can sometimes result in reduced network performance
   relative to asymmetric routing.  Enterprise networks may also be less
   susceptible to these problems if they route all traffic through a
   small number of egress points.

4.2.2.  Granularity

   Once an intermediary in a network has access to traffic, it must
   identify which packets must be filtered.  This decision is usually
   based on some combination of information at the network layer (e.g.,
   IP addresses), transport layer (ports), or application layer (URIs or
   other content).  Deep Packet Inspection type blocking based on
   application-layer attributes can be potentially more granular and
   less likely to cause collateral damage than blocking all traffic
   associated with a particular address, which can impact unrelated
   occupants of the same address.  However, more narrowly focused
   targeting may be more complex, less efficient, or easier to
   circumvent than filtering that sweeps more broadly, and those who
   seek to block must balance these attributes against each other when
   choosing a blocking system.

4.2.3.  Efficacy and Security

   Regardless of the layer at which blocking occurs, it may be open to
   circumvention, particularly in cases where network endpoints have not
   authorized the blocking.  The communicating endpoints can deny the
   intermediary access to attributes at any layer by using encryption
   (see below).  IP addresses must be visible, even if packets are
   protected with IPsec, but blocking based on IP addresses can be
   trivial to circumvent.  A filtered site may be able to quickly change
   its IP address using only a few simple steps: changing a single DNS
   record and provisioning the new address on its server or moving its
   services to the new address [BT-TPB].

   Indeed, Poort et al. [Poort] found that "any behavioural change in
   response to blocking access to The Pirate Bay has had no lasting net
   impact on the overall number of downloaders from illegal sources, as
   new consumers have started downloading from illegal sources and
   people learn to circumvent the blocking while new illegal sources may
   be launched, causing file sharing to increase again", and that these
   results "are in line with a tendency found in the literature that any
   effects of legal action against file sharing often fade out after a
   period of typically six months."

   If application content is encrypted with a security protocol such as
   IPsec or TLS, then the intermediary will require the ability to
   decrypt the packets to examine application content, or resort to
   statistical methods to guess what the content is.  Since security
   protocols are generally designed to provide end-to-end security
   (i.e., to prevent intermediaries from examining content), the
   intermediary would need to masquerade as one of the endpoints,
   breaking the authentication in the security protocol, reducing the
   security of the users and services affected, and interfering with
   legitimate private communication.  Moreover, various techniques that
   use public databases of whitelisted keys (e.g., DANE [RFC6698])
   enable users to detect this sort of intermediary.  Those users are
   then likely to act as if the service is blocked.

   If the intermediary is unable to decrypt the security protocol, then
   its blocking determinations for secure sessions can only be based on
   unprotected attributes, such as IP addresses, protocol IDs, port
   numbers, packet sizes, and packet timing.  Some blocking systems
   today still attempt to block based on these attributes, for example
   by blocking TLS traffic to known proxies that could be used to tunnel
   through the blocking system.

   However, as the Telex project [Telex] recently demonstrated, if an
   endpoint cooperates with a relay in the network (e.g., a Telex
   station), it can create a TLS tunnel that is indistinguishable from
   legitimate traffic.  For example, if an ISP used by a banking web
   site were to operate a Telex station at one of its routers, then a
   blocking system would be unable to distinguish legitimate encrypted
   banking traffic from Telex-tunneled traffic (potentially carrying
   content that would have been filtered).

   Thus, in principle in a blacklist system it is impossible to block
   tunneled traffic through an intermediary device without blocking all
   secure traffic from that system.  (The only limitation in practice is
   the requirement for special software on the client.)  Those who
   require that secure traffic be blocked from such sites risk blocking
   content that would be valuable to their users, perhaps impeding
   substantial economic activity.  Conversely, those who host a myriad
   of content have an incentive to see that law-abiding content does
   not end up being blocked.

   Governments and network operators should, however, take care not to
   encourage the use of insecure communications in the name of
   security, as doing so will invariably expose their users to the
   various attacks that the security protocols were put in place to
   prevent.
   Some operators may assume that blocking access only to resources
   available via insecure channels is sufficient for their purposes --
   i.e., that the number of users willing to use secure tunnels and/or
   special software to circumvent the blocking is low enough to make
   blocking via intermediaries worthwhile.  Under that assumption, one
   might decide that there is no need to control secure traffic and
   thus that network-based blocking is an attractive option.

   However, the longer such blocking systems are in place, the more
   likely it is that efficient and easy-to-use tunneling tools will
   become available.  The proliferation of the Tor network, for example,
   and its increasingly sophisticated blocking-avoidance techniques
   demonstrate that there is energy behind this trend [Tor].  Thus,
   network-based blocking becomes less effective over time.

   Network-based blocking is a key contributor to the arms race that has
   led to the development of such tools, the result of which is to
   create unnecessary layers of complexity in the Internet.  Before
   content-based blocking became common, the next best option for
   network operators was port blocking, the widespread use of which has
   driven more applications and services to use ports (80 and 443 most
   commonly) that are unlikely to be blocked.  In turn, network
   operators shifted to finer-grained content blocking over port 80,
   content providers shifted to encrypted channels, and operators began
   seeking to identify those channels (although doing so can be
   resource-prohibitive, especially if tunnel endpoints begin to change
   frequently).  Because the premise of network-based blocking is that
   endpoints have incentives to circumvent it, this cat-and-mouse game
   is an inevitable by-product of this form of blocking.

   One reason above all stands as an enormous challenge to network-based
   blocking: the Internet was designed with the premise that people will
   want to connect and communicate.  IP will run on anything up to and
   including carrier pigeons [RFC1149].  It often runs atop TLS and has
   been made to run on other protocols that themselves run atop IP.
   Because of this fundamental layering approach, nearly any authorized
   avenue of communication can be used as a transport.  This same
   "problem" permits communications to succeed in the most challenging
   of environments.

4.2.4.  Summary

   In sum, network-based blocking is only effective in a fairly
   constrained set of circumstances.  First, the traffic needs to flow
   through the network in such a way that the intermediary device has
   access to any communications it intends to block.  Second, the
   blocking system needs an out-of-band mechanism to mitigate the risk
   of secure protocols being used to avoid blocking (e.g., human
   analysts identifying IP addresses of tunnel endpoints).  If the
   network is sufficiently complex, or the risk of tunneling too high,
   then network-based blocking is unlikely to be effective, and in any
   case this type of blocking drives the development of increasingly
   complex layers of circumvention.  Network-based blocking can be done
   without the cooperation of either endpoint to a communication, but it
   has the serious drawback of breaking end-to-end security assurances
   in some cases.  The fact that network-based blocking is premised on
   this lack of cooperation results in arms races that increase the
   complexity of both application design and network design.

4.3.  Rendezvous-Based Blocking

   Internet applications often require or rely on support from common,
   global rendezvous services, including the DNS, certificate
   authorities, search engines, WHOIS databases, and Internet Route
   Registries.  These services control or register the structure and
   availability of Internet applications by providing data elements that
   are used by application code.  Some applications also have their own
   specialized rendezvous services.  For example, to establish an end-
   to-end SIP call, the end-nodes (terminals) rely on presence and
   session information supplied by SIP servers.

   Global rendezvous services consist of generic technical databases
   intended to record certain facts about the network.  The
   DNS, for example, stores information about which servers provide
   services for a given name, and the Resource Public Key Infrastructure
   (RPKI) stores information about which organizations have been
   allocated IP addresses.  To offer specialized Internet services and
   applications, different people rely on these generic records in
   different ways.  Thus, the effects of changes to the databases can be
   much more difficult to predict than, for example, the effect of
   shutting down a web server (which fulfills the specific purpose of
   serving web content).

   Although rendezvous services are discussed as a single category, the
   precise characteristics and implications of blocking each kind of
   rendezvous service are slightly different.  This section provides
   examples to highlight these differences.

4.3.1.  Scope

   In the case of government-initiated blocking, the operators of
   servers used to provide rendezvous service that are subject to a
   specific jurisdiction may be required to block or filter.  Thus, it
   is possible for laws to be structured to result in blocking by
   imposing obligations on the operators of rendezvous services within a
   jurisdiction, either via direct government action or by allowing
   private actors to demand blocking (e.g., through lawsuits).

   The scope of blocking conducted by others will depend on which
   servers they can access.  For example, network operators and
   enterprises may be capable of conducting blocking using their own DNS
   resolvers or application proxies within their networks, but not
   authoritative servers controlled by others.

   However, if a service is hosted and operated within a jurisdiction
   where it is considered legitimate, then blocking access at a global
   rendezvous service (e.g., one within a jurisdiction where it is
   considered illegitimate) might deny services in jurisdictions where
   they are considered legitimate.  This type of collateral damage is
   lessened when blocking is done at a local rendezvous server that only
   has local impact, rather than at a global rendezvous server with
   global impact.

4.3.2.  Granularity

   Blocking at a global rendezvous service can be overbroad if the
   resources blocked support multiple services, since blocking one
   service can cause collateral damage to legitimate uses of the
   others.  For example, a given address or domain name might host both
   legitimate services and services that some would desire to block.

4.3.3.  Efficacy

   The distributed nature of the Internet limits the efficacy of
   blocking based on rendezvous services.  If the Internet community
   realizes that a blocking decision has been made and wishes to counter
   it, then local networks can "patch" the authoritative data that a
   global rendezvous service provides to avoid the blocking (although
   the deployment of DNSSEC and the RPKI is changing this by requiring
   updates to be authorized).  In the DNS case, registrants whose names
   get blocked can relocate their resources to different names.

   Endpoints can also choose not to use a particular rendezvous service.
   They might switch to a competitor or use an alternate mechanism (for
   example, IP literals in URIs to circumvent DNS filtering).
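
   The IP-literal workaround works because a client handed a literal
   address never consults the (filtering) resolver at all.  A minimal
   sketch, with a hypothetical stand-in for the resolver and addresses
   from the documentation ranges:

```python
import ipaddress

BLOCKED_NAMES = {"blocked.example"}

def filtered_resolve(name: str) -> str:
    """Stand-in for a recursive resolver that enforces a block list."""
    if name in BLOCKED_NAMES:
        raise LookupError("name blocked by resolver policy")
    return "192.0.2.10"  # illustrative answer

def connect_target(host: str) -> str:
    """Return the address a client would actually connect to."""
    try:
        ipaddress.ip_address(host)     # already an IP literal?
    except ValueError:
        return filtered_resolve(host)  # a name: the filter is consulted
    return host                        # a literal: DNS is bypassed entirely
```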

4.3.4.  Security and Other Implications

   Blocking of global rendezvous services also has a variety of other
   implications that may reduce the stability, accessibility, and
   usability of the global Internet.  Infrastructure-based blocking
   may, for example, erode trust in the general Internet and encourage
   the development of parallel or "underground" infrastructures,
   causing forms of Internet fragmentation.  This risk may become more
   acute
   as the introduction of security infrastructures and mechanisms such
   as DNSSEC and RPKI "hardens" the authoritative data -- including
   blocked names or routes -- that the existing infrastructure services
   provide.  Those seeking to circumvent the blocks may opt to use less-
   secure but unblocked parallel services.  As applied to the DNS, these
   considerations are further discussed in RFC 2826 [RFC2826], in the
   advisory [SAC-056] from ICANN's Security and Stability Advisory
   Committee (SSAC), and in the Internet Society's whitepaper on DNS
   filtering [ISOCFiltering], but they also apply to other global
   Internet resources.

4.3.5.  Examples

   Below we provide a few specific examples for routing, DNS, and WHOIS
   services.  These examples demonstrate that for these types of
   rendezvous services (services that are often considered a global
   commons), jurisdiction-specific legal and ethical motivations for
   blocking can both have collateral effects in other jurisdictions and
   be circumvented because of the distributed nature of the Internet.

   In 2008, Pakistan Telecom attempted to deny access to YouTube within
   Pakistan by announcing bogus routes for YouTube address space to
   peers in Pakistan.  YouTube was temporarily denied service on a
   global basis as a result of a route leak beyond the Pakistani ISP's
   scope, but service was restored in approximately two hours because
   network operators around the world reconfigured their routers to
   ignore the bogus routes [RenesysPK].  In the context of SIDR and
   secure routing, a similar reconfiguration could theoretically be done
   if a resource certificate were to be revoked in order to block
   routing to a given network.

   In the DNS realm, one of the recent cases of U.S. law enforcement
   seizing domain names involved RojaDirecta, a Spanish web site.  Even
   though several of the affected domain names belonged to Spanish
   organizations, they were subject to blocking by the U.S. government
   because certain servers were operated in the United States.

   Government officials required the operators of the parent zones of a
   target name (e.g., "com") to direct queries for that name to a set
   of U.S.-government-operated name servers.  Users
   of other services (e.g., email) under a target name would thus be
   unable to locate the servers providing services for that name,
   denying them the ability to access these services.

   Workarounds similar to those used in the Pakistan Telecom case are
   also available in the DNS case.  If a domain name is blocked
   by changing authoritative records, network operators can restore
   service simply by extending TTLs on cached pre-blocking records in
   recursive resolvers, or by statically configuring resolvers to return
   unblocked results for the affected name.  However, depending on the
   availability of valid signature data, these types of workarounds will
   not work with DNSSEC-signed data.
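
   Such a workaround amounts to a local override table consulted before
   the upstream answer.  In this sketch the names and addresses are
   illustrative; as noted above, DNSSEC-validating clients defeat the
   trick unless valid signatures for the overridden data are still
   available.

```python
# Operator-configured overrides restoring pre-blocking answers.
LOCAL_OVERRIDES = {"seized.example": "198.51.100.5"}

def upstream_resolve(name: str) -> str:
    """Stand-in for the post-seizure authoritative answer."""
    return "203.0.113.1"  # the seizing authority's servers (illustrative)

def resolve(name: str) -> str:
    """Answer from the local override table before asking upstream."""
    if name in LOCAL_OVERRIDES:
        return LOCAL_OVERRIDES[name]
    return upstream_resolve(name)
```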

   The action of the Dutch authorities against the RIPE NCC, where RIPE
   was ordered to freeze the accounts of Internet resource holders, is
   of a similar character.  By controlling the account holders' WHOIS
   information, this type of action limited the ability of the ISPs in
   question to manage their Internet resources.  This example is
   slightly different from the others because it does not immediately
   impact the ability of ISPs to provide connectivity.  While ISPs use
   (and trust) the WHOIS databases to build route filters or use the
   databases for trouble-shooting information, the use of the WHOIS
   databases for those purposes is voluntary.  Thus, seizure of this
   sort may not have any immediate effect on network connectivity, but
   it may impact overall trust in the common infrastructure.  It is
   similar to the other examples in that action in one jurisdiction can
   have broader effects, and in that the global system may encourage
   networks to develop their own autonomous solutions.

4.3.6.  Summary

   In summary, rendezvous-based blocking can sometimes be used to
   immediately block a target service by removing some of the resources
   it depends on.  However, such blocking actions can have harmful side
   effects due to the global nature of Internet resources and the fact
   that many different application-layer services rely on generic,
   global databases for rendezvous purposes.  The fact that Internet
   resources can quickly shift between network locations, names, and
   addresses, together with the autonomy of the networks that comprise
   the Internet, can mean that the effects of rendezvous-based blocking
   can be negated in short order in some cases.  For some applications,
   rendezvous services are optional to use, not mandatory.  Hence, they
   are only effective when the endpoint or the endpoint's network
   chooses to use them; they can be routed around by choosing not to use
   the rendezvous service or migrating to an alternative one.  To adapt
   a quote by John Gilmore, "The Internet treats blocking as damage and
   routes around it".

4.4.  Endpoint-Based Blocking

   Internet users and their devices constantly make decisions as to
   whether to engage in particular Internet communications.  Users
   decide whether to click on links in suspect email messages; browsers
   advise users on sites that have suspicious characteristics; spam
   filters evaluate the validity of senders and messages.  If the
   hardware and software making these decisions can be instructed not to
   engage in certain communications, then the communications are
   effectively blocked because they never happen.

   There are several systems in place today that advise user systems
   about which communications they should engage in.  As discussed
   above, several modern browsers consult with "Safe Browsing" services
   before loading a web site in order to determine whether the site
   could potentially be harmful.  Spam filtering is one of the oldest
   types of filtering in the Internet; modern filtering systems
   typically make use of one or more "reputation" or "blacklist"
   databases in order to make decisions about whether a given message or
   sender should be blocked.  These systems typically have the property
   that many filtering systems (browsers, Mail Transfer Agents (MTAs))
   share a single reputation service.  Even the absence of provisioned
   PTR records for an IP address may result in email messages not being
   accepted.

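   A minimal sketch of this shared-reputation pattern follows, with a
   hypothetical local set standing in for a real reputation database
   (real deployments query services such as Safe Browsing or DNS
   blacklists rather than a hard-coded list):

```python
# Sketch of an endpoint consulting a reputation source before engaging
# in a communication.  The local set below is a stand-in for a shared
# reputation service; the entries are hypothetical.

BLOCKLIST = {"malware.example", "phish.example"}

def should_connect(hostname):
    """Endpoint-side decision: refuse to engage if the destination has
    a bad reputation.  The blocked communication then simply never
    happens, which is what makes endpoint-based blocking effective."""
    return hostname not in BLOCKLIST
```

   Because many endpoints (browsers, MTAs) consult the same reputation
   data, a single database update changes the behavior of every
   cooperating endpoint at once.
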
4.4.1.  Scope

   In an endpoint-based blocking system, blocking actions are performed
   autonomously, by individual endpoints or their delegates.  The
   effects of blocking are thus usually local in scope, minimizing the
   effects on other users or other, legitimate services.

4.4.2.  Granularity

   Endpoint-based blocking avoids some of the limitations of rendezvous-
   based blocking: while rendezvous-based blocking can only see and
   affect the rendezvous service at hand (e.g., DNS name resolution),
   endpoint-based blocking can potentially see into the entire
   application, across all layers and transactions.  This visibility can
   provide endpoint-based blocking systems with a much richer set of
   information for making narrow blocking decisions.  Support for narrow
   granularity depends on how the application protocol client and server
   are designed, however.  A typical endpoint-based firewall application
   may have less ability to make fine-grained decisions than an
   application that does its own blocking (see [RFC7288] for further
   discussion).

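   The difference in granularity can be sketched as two hypothetical
   decision functions: a host-firewall rule that sees only
   network-level attributes, and an application-level rule that sees
   the full transaction.  All hosts and paths below are invented for
   illustration.

```python
# Sketch contrasting blocking granularity.  A host firewall typically
# sees only network-level attributes; an application doing its own
# blocking can see the whole transaction.  Rules are hypothetical.

def firewall_allows(dst_host, dst_port):
    # Coarse: the firewall can only block every connection to the
    # host, taking all of its content (benign or not) with it.
    # dst_port is visible but does not help distinguish content.
    return dst_host != "shared-hosting.example"

def application_allows(dst_host, url_path):
    # Fine-grained: the application can block only the specific
    # resource, leaving the rest of the (possibly shared) host
    # reachable.
    return not (dst_host == "shared-hosting.example"
                and url_path.startswith("/infringing/"))
```

   The application-level rule avoids the collateral damage that the
   coarse rule inflicts on other content hosted at the same address.
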
4.4.3.  Efficacy

   Endpoint-based blocking deals well with mobile adversaries.  If a
   blocked service relocates resources or uses different resources, a
   rendezvous- or network-based blocking approach may not be able to
   affect the new resources (at least not immediately).  A network-based
   blocking system may not even be able to tell whether the new
   resources are being used, if the previously blocked service uses
   secure protocols.  By contrast, endpoint-based blocking systems can
   detect when a blocked service's resources have changed (because of
   their full visibility into transactions) and adjust blocking as
   quickly as new blocking data can be sent out through a reputation
   system.

   The primary challenge to endpoint-based blocking is that it requires
   the cooperation of endpoints.  Where this cooperation is willing,
   this is a fairly low barrier, requiring only reconfiguration or
   software update.  Where cooperation is unwilling, it can be
   challenging to enforce cooperation for large numbers of endpoints.
   That challenge is exacerbated when the endpoints are a diverse set of
   static, mobile, or visiting endpoints.  If cooperation can be
   achieved, endpoint-based blocking can be much more effective than
   other approaches because it is so coherent with the Internet's
   architectural principles.

4.4.4.  Security

   Endpoint-based blocking is performed at one end of an Internet
   communication, and thus avoids the problems related to end-to-end
   security mechanisms that network-based blocking runs into and the
   challenges to global trust infrastructures that rendezvous-based
   blocking creates.

4.4.5.  Server Endpoints

   In this discussion of endpoint-based blocking, the focus has been on
   the consuming side of the end-to-end communication, mostly the client
   side of a client-server type connection.  However, similar
   considerations apply to the content-producing side of end-to-end
   communications, regardless of whether that endpoint is a server in a
   client-server connection or a peer in a peer-to-peer type of
   connection.

   For instance, for blocking of web content, narrow targeting can be
   achieved through whitelisting methods like password authentication,
   whereby passwords are available only to authorized clients.  For
   example, a web site might only make adult content available to users
   who provide credit card information, which is assumed to be a proxy
   for age.
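
   A sketch of this kind of server-side whitelisting follows, with a
   token check standing in for real password- or payment-based
   authentication; the resource paths, token value, and status codes
   are hypothetical.

```python
# Sketch of server-side narrow targeting via whitelisting: restricted
# content is served only to clients presenting a recognized credential.
# The token check is a stand-in for real authentication (e.g., password
# or credit-card verification); all values are hypothetical.

AUTHORIZED_TOKENS = {"tok-adult-verified"}  # issued only to authorized clients

def serve(resource, token=None):
    """Return an HTTP-like status code: restricted resources require a
    recognized credential; everything else is served openly."""
    restricted = resource.startswith("/restricted/")
    if restricted and token not in AUTHORIZED_TOKENS:
        return 403
    return 200
```

   Because the check happens at the content-producing endpoint, only
   the targeted resources are affected and no third-party
   infrastructure needs to participate in the blocking.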

   The fact that content-producing endpoints often do not take it upon
   themselves to block particular forms of content in response to
   requests from governments or other parties can sometimes motivate
   those latter parties to engage in blocking elsewhere within the
   network.

   If a service is to be blocked, the best way of doing that is to
   disable the service at the server endpoint.

4.4.6.  Summary

   Out of the three design patterns, endpoint-based blocking is the
   least likely to cause collateral damage to Internet services or the
   overall Internet architecture.  Endpoint-based blocking systems can
   potentially see into all layers involved in a communication, allowing
   blocking to be narrowly targeted and minimizing unintended
   consequences.  Adversary mobility can be accounted for as soon as
   reputation systems are updated with new adversary information.  One
   potential drawback of endpoint-based blocking is that it requires the
   endpoint's cooperation; implementing blocking at an endpoint when it
   is not in the endpoint's interest is therefore difficult to
   accomplish because the endpoint's user can disable the blocking or
   switch to a different endpoint.

5.  Security Considerations

   The primary security concern related to Internet service blocking is
   the effect that it has on the end-to-end security model of many
   Internet security protocols.  When blocking is enforced by an
   intermediary with respect to a given communication, the blocking
   system may need to obtain access to confidentiality-protected data to
   make blocking decisions.  Mechanisms for obtaining such access often
   require the blocking system to defeat the authentication mechanisms
   built into security protocols.

   For example, some enterprise firewalls will dynamically create TLS
   certificates under a trust anchor recognized by endpoints subject to
   blocking.  These certificates allow the firewall to authenticate as
   any web site, so that it can act as a man-in-the-middle on TLS
   connections passing through the firewall.  This is not unlike an
   external attacker using compromised certificates to intercept TLS
   connections.

   Modifications such as these obviously make the firewall itself an
   attack surface.  If an attacker can gain control of the firewall or
   compromise the key pair used by the firewall to sign certificates,
   the attacker will have access to the unencrypted data of all current
   and recorded TLS sessions for all users behind that firewall, in a
   way that is undetectable to users.  Moreover, if the compromised key
   pair can be extracted from the firewall, then all users that rely on
   that public key are vulnerable, not only those behind the firewall.
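
   One way an endpoint can notice this kind of interception is
   certificate pinning, sketched below under stated assumptions: if the
   client has previously pinned the fingerprint of the origin server's
   certificate, a certificate minted by the firewall will not match,
   even though it chains to a locally trusted root.  The hostname and
   certificate bytes are placeholders, not real data.

```python
# Sketch of certificate pinning.  The client compares the SHA-256
# fingerprint of the certificate it received against a previously
# pinned value for the site.  A firewall minting its own certificate
# presents a different key, so the fingerprint no longer matches.
# All certificate bytes and hostnames here are hypothetical.

import hashlib

# Fingerprint recorded on a previous, uninterrupted connection.
PINNED = {
    "bank.example": hashlib.sha256(b"origin-server-cert-der").hexdigest(),
}

def connection_is_intercepted(hostname, presented_cert_der):
    """True if a pin exists for the host and the presented certificate
    does not match it; False if it matches or no pin is recorded."""
    fingerprint = hashlib.sha256(presented_cert_der).hexdigest()
    pinned = PINNED.get(hostname)
    return pinned is not None and fingerprint != pinned
```

   Pinning does not prevent the interception, but it removes the
   "undetectable" property: the substituted certificate is visible to
   any client that remembers what the origin server normally presents.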

   We must also consider the possibility that a legitimate administrator
   of such a firewall could gain access to privacy-sensitive
   information, such as the bank accounts or health records of users who
   access such secure sites through the firewall.  These privacy
   considerations motivate legitimate use of secure end-to-end protocols
   that often make it difficult to enforce granular blocking policies.

   When blocking systems are unable to inspect and surgically block
   secure protocols, it is tempting to completely block those protocols.
   For example, a web blocking system that is unable to inspect HTTPS
   connections might simply block any attempted HTTPS connection.
   However, since Internet security protocols are commonly used for
   critical services such as online commerce and banking, blocking these
   protocols would block access to these services as well, or worse,
   force them to be conducted over insecure communication.

   Security protocols can, of course, also be used as mechanisms for
   blocking services.  For example, if a blocking system can insert
   invalid credentials for one party in an authentication protocol, then
   the other end will typically terminate the connection based on the
   authentication failure.  However, it is typically much simpler to
   block secure protocols outright than to exploit them for service
   blocking.

6.  Conclusion

   Filtering will continue to occur on the Internet.  We conclude that,
   whenever possible, filtering should be done on the endpoint.
   Cooperative endpoints are most likely to have sufficient contextual
   knowledge to effectively target blocking; hence, such blocking
   minimizes unintended consequences.  It is realistic to expect that at
   times filtering will not be done on the endpoints.  In these cases,
   promptly informing the endpoint that blocking has occurred provides
   necessary transparency to redress any errors, particularly as they
   relate to any collateral damage introduced by errant filters.

   Blacklist approaches are often a game of "cat and mouse", where those
   with the content move it around to avoid blocking.  Or, the content
   may even be naturally mirrored or cached at other legitimate sites
   such as the Internet Archive Wayback Machine [Wayback].  At the same
   time, whitelists pose similar risks because sites that had
   "acceptable" content may become targets for "unacceptable" content,
   while access to perfectly inoffensive and perhaps useful or
   productive content is unnecessarily blocked.

   From a technical perspective, there are no perfect or even good
   solutions -- there is only least bad.  On that front, we posit that a
   hybrid approach that combines endpoint-based filtering with network
   filtering may prove least damaging.  An endpoint may choose to
   participate in a filtering regime in exchange for the network
   providing broader unfiltered access.

   Finally, we note that where filtering is occurring to address content
   that is generally agreed to be inappropriate or illegal, strong
   cooperation among service providers and governments may provide
   additional means to identify both the victims and the perpetrators
   through non-filtering mechanisms, such as partnerships with the
   finance industry to identify and limit illegal transactions.

7.  Informative References

              John, W., Dusi, M., and K. Claffy, "Estimating routing
              symmetry on single links by passive flow measurements",
              Proceedings of the 6th International Wireless
              Communications and Mobile Computing Conference, IWCMC '10,
              DOI 10.1145/1815396.1815506, 2010,

              Chachra, N., McCoy, D., Savage, S., and G. Voelker,
              "Empirically Characterizing Domain Abuse and the Revenue
              Impact of Blacklisting", Workshop on the Economics of
              Information Security 2014,

   [BT-TPB]   Meyer, D., "BT blocks The Pirate Bay", June 2012,

              Clayton, R., "Failures in a Hybrid Content Blocking
              System", Fifth Privacy Enhancing Technologies Workshop,
              PET 2005, DOI 10.1007/11767831_6, 2005,

              RIPE NCC, "RIPE NCC Blocks Registration in RIPE Registry
              Following Order from Dutch Police", 2012,

   [IN-OM-filtering]
              Citizen Lab, "Routing Gone Wild: Documenting upstream
              filtering in Oman via India", July 2012,

              Internet Society, "DNS: Finding Solutions to Illegal
              On-line Activities", 2012,

              Dagon, D., Provos, N., Lee, C., and W. Lee, "Corrupted DNS
              Resolution Paths: The Rise of a Malicious Resolution
              Authority", 2008,

   [Morris]   Kehoe, B., "The Robert Morris Internet Worm", 1992,

   [NW08]     Marsan, C., "YouTube/Pakistan incident: Could something
              similar whack your site?", Network World, March 2008,

   [Poort]    Poort, J., Leenheer, J., van der Ham, J., and C. Dumitru,
              "Baywatch: Two approaches to measure the effects of
              blocking access to The Pirate Bay", Telecommunications
              Policy 38:383-392, DOI 10.1016/j.telpol.2013.12.008, 2014,

              Brown, M., "Pakistan hijacks YouTube", February 2008,

   [RFC1122]  Braden, R., Ed., "Requirements for Internet Hosts -
              Communication Layers", STD 3, RFC 1122,
              DOI 10.17487/RFC1122, October 1989,
              <https://www.rfc-editor.org/info/rfc1122>.

   [RFC1149]  Waitzman, D., "Standard for the transmission of IP
              datagrams on avian carriers", RFC 1149,
              DOI 10.17487/RFC1149, April 1990,
              <https://www.rfc-editor.org/info/rfc1149>.

   [RFC2826]  Internet Architecture Board, "IAB Technical Comment on the
              Unique DNS Root", RFC 2826, DOI 10.17487/RFC2826, May
              2000, <https://www.rfc-editor.org/info/rfc2826>.

   [RFC2827]  Ferguson, P. and D. Senie, "Network Ingress Filtering:
              Defeating Denial of Service Attacks which employ IP Source
              Address Spoofing", BCP 38, RFC 2827, DOI 10.17487/RFC2827,
              May 2000, <https://www.rfc-editor.org/info/rfc2827>.

   [RFC2979]  Freed, N., "Behavior of and Requirements for Internet
              Firewalls", RFC 2979, DOI 10.17487/RFC2979, October 2000,
              <https://www.rfc-editor.org/info/rfc2979>.

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
              A., Peterson, J., Sparks, R., Handley, M., and E.
              Schooler, "SIP: Session Initiation Protocol", RFC 3261,
              DOI 10.17487/RFC3261, June 2002,
              <https://www.rfc-editor.org/info/rfc3261>.

   [RFC3704]  Baker, F. and P. Savola, "Ingress Filtering for Multihomed
              Networks", BCP 84, RFC 3704, DOI 10.17487/RFC3704, March
              2004, <https://www.rfc-editor.org/info/rfc3704>.

   [RFC4033]  Arends, R., Austein, R., Larson, M., Massey, D., and S.
              Rose, "DNS Security Introduction and Requirements",
              RFC 4033, DOI 10.17487/RFC4033, March 2005,
              <https://www.rfc-editor.org/info/rfc4033>.

   [RFC4084]  Klensin, J., "Terminology for Describing Internet
              Connectivity", BCP 104, RFC 4084, DOI 10.17487/RFC4084,
              May 2005, <https://www.rfc-editor.org/info/rfc4084>.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, DOI 10.17487/RFC4301,
              December 2005, <https://www.rfc-editor.org/info/rfc4301>.

   [RFC4924]  Aboba, B., Ed. and E. Davies, "Reflections on Internet
              Transparency", RFC 4924, DOI 10.17487/RFC4924, July 2007,
              <https://www.rfc-editor.org/info/rfc4924>.

   [RFC4948]  Andersson, L., Davies, E., and L. Zhang, "Report from the
              IAB workshop on Unwanted Traffic March 9-10, 2006",
              RFC 4948, DOI 10.17487/RFC4948, August 2007,
              <https://www.rfc-editor.org/info/rfc4948>.

   [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
              FYI 36, RFC 4949, DOI 10.17487/RFC4949, August 2007,
              <https://www.rfc-editor.org/info/rfc4949>.

   [RFC5246]  Dierks, T. and E. Rescorla, "The Transport Layer Security
              (TLS) Protocol Version 1.2", RFC 5246,
              DOI 10.17487/RFC5246, August 2008,
              <https://www.rfc-editor.org/info/rfc5246>.

   [RFC5782]  Levine, J., "DNS Blacklists and Whitelists", RFC 5782,
              DOI 10.17487/RFC5782, February 2010,
              <https://www.rfc-editor.org/info/rfc5782>.

   [RFC6480]  Lepinski, M. and S. Kent, "An Infrastructure to Support
              Secure Internet Routing", RFC 6480, DOI 10.17487/RFC6480,
              February 2012, <https://www.rfc-editor.org/info/rfc6480>.

   [RFC6698]  Hoffman, P. and J. Schlyter, "The DNS-Based Authentication
              of Named Entities (DANE) Transport Layer Security (TLS)
              Protocol: TLSA", RFC 6698, DOI 10.17487/RFC6698, August
              2012, <https://www.rfc-editor.org/info/rfc6698>.

   [RFC6943]  Thaler, D., Ed., "Issues in Identifier Comparison for
              Security Purposes", RFC 6943, DOI 10.17487/RFC6943, May
              2013, <https://www.rfc-editor.org/info/rfc6943>.

   [RFC7288]  Thaler, D., "Reflections on Host Firewalls", RFC 7288,
              DOI 10.17487/RFC7288, June 2014,
              <https://www.rfc-editor.org/info/rfc7288>.

   [RojaDirecta]
              Masnick, M., "Homeland Security Seizes Spanish Domain Name
              That Had Already Been Declared Legal", 2011,

   [SAC-056]  ICANN SSAC, "SSAC Advisory on Impacts of Content Blocking
              via the Domain Name System", October 2012,

   [SafeBrowsing]
              Google, "Safe Browsing API", 2012,

              Moore, T. and R. Clayton, "The Impact of Incentives on
              Notice and Take-down", Workshop on the Economics of
              Information Security 2008,

   [Telex]    Wustrow, E., Wolchok, S., Goldberg, I., and J. Halderman,
              "Telex: Anticensorship in the Network Infrastructure",

   [Tor]      "Tor Project: Anonymity Online",

   [Wayback]  "Internet Archive: Wayback Machine",

IAB Members at the Time of Approval

   Jari Arkko
   Mary Barnes
   Marc Blanchet
   Ralph Droms
   Ted Hardie
   Joe Hildebrand
   Russ Housley
   Erik Nordmark
   Robert Sparks
   Andrew Sullivan
   Dave Thaler
   Brian Trammell
   Suzanne Woolf

Acknowledgements

   Thanks to the many reviewers who provided helpful comments,
   especially Bill Herrin, Eliot Lear, Patrik Faltstrom, Pekka Savola,
   and Russ White.  NLnet Labs is also acknowledged as Olaf Kolkman's
   employer during most of this document's development.

Authors' Addresses

   Richard Barnes
   Suite 300
   650 Castro Street
   Mountain View, CA  94041
   United States


   Alissa Cooper
   707 Tasman Drive
   Milpitas, CA  95035
   United States


   Olaf Kolkman
   Internet Society


   Dave Thaler
   One Microsoft Way
   Redmond, WA  98052
   United States


   Erik Nordmark