Internet Architecture Board (IAB)                              R. Barnes
Request for Comments: 7754                                     A. Cooper
Category: Informational                                       O. Kolkman
ISSN: 2070-1721                                                D. Thaler
                                                             E. Nordmark
                                                              March 2016


  Technical Considerations for Internet Service Blocking and Filtering

Abstract

   The Internet is structured to be an open communications medium.  This
   openness is one of the key underpinnings of Internet innovation, but
   it can also allow communications that may be viewed as undesirable by
   certain parties.  Thus, as the Internet has grown, so have mechanisms
   to limit the extent and impact of abusive or objectionable
   communications.  Recently, there has been an increasing emphasis on
   "blocking" and "filtering", the active prevention of such
   communications.  This document examines several technical approaches
   to Internet blocking and filtering in terms of their alignment with
   the overall Internet architecture.  When it is possible to do so, the
   approach to blocking and filtering that is most coherent with the
   Internet architecture is to inform endpoints about potentially
   undesirable services, so that the communicants can avoid engaging in
   abusive or objectionable communications.  We observe that certain
   filtering and blocking approaches can cause unintended consequences
   to third parties, and we discuss the limits of efficacy of various
   approaches.

Status of This Memo

   This document is not an Internet Standards Track specification; it is
   published for informational purposes.

   This document is a product of the Internet Architecture Board (IAB)
   and represents information that the IAB has deemed valuable to
   provide for permanent record.  It represents the consensus of the
   Internet Architecture Board (IAB).  Documents approved for
   publication by the IAB are not a candidate for any level of Internet
   Standard; see Section 2 of RFC 5741.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc7754.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Table of Contents

   1.  Introduction
   2.  Filtering Examples
   3.  Characteristics of Blocking Systems
     3.1.  The Party Who Sets Blocking Policies
     3.2.  Purposes of Blocking
       3.2.1.  Blacklist vs. Whitelist Model
     3.3.  Intended Targets of Blocking
     3.4.  Components Used for Blocking
   4.  Evaluation of Blocking Design Patterns
     4.1.  Criteria for Evaluation
       4.1.1.  Scope: What set of hosts and users are affected?
       4.1.2.  Granularity: How specific is the blocking?  Will
               blocking one service also block others?
       4.1.3.  Efficacy: How easy is it for a resource or service to
               avoid being blocked?
       4.1.4.  Security: How does the blocking impact existing trust
               infrastructures?
     4.2.  Network-Based Blocking
       4.2.1.  Scope
       4.2.2.  Granularity
       4.2.3.  Efficacy and Security
       4.2.4.  Summary
     4.3.  Rendezvous-Based Blocking
       4.3.1.  Scope
       4.3.2.  Granularity
       4.3.3.  Efficacy
       4.3.4.  Security and Other Implications
       4.3.5.  Examples
       4.3.6.  Summary
     4.4.  Endpoint-Based Blocking
       4.4.1.  Scope
       4.4.2.  Granularity
       4.4.3.  Efficacy
       4.4.4.  Security
       4.4.5.  Server Endpoints
       4.4.6.  Summary
   5.  Security Considerations
   6.  Conclusion
   7.  Informative References
   IAB Members at the Time of Approval
   Acknowledgments
   Authors' Addresses

1.  Introduction

   The original design goal of the Internet was to enable communications
   between hosts.  As this goal was met and people started using the
   Internet to communicate, however, it became apparent that some hosts
   were engaging in communications that were viewed as undesirable by
   certain parties.  The most famous early example of undesirable
   communications was the Morris worm [Morris], which used the Internet
   to infect many hosts in 1988.  As the Internet has evolved into a
   rich communications medium, so too have mechanisms to restrict
   communications viewed as undesirable, ranging from acceptable use
   policies enforced through informal channels to technical blocking
   mechanisms.

   Efforts to restrict or deny access to Internet resources and services
   have evolved over time.  As noted in [RFC4084], some Internet service
   providers perform filtering to restrict which applications their
   customers may use and which traffic they allow on their networks.
   These restrictions are often imposed with customer consent, where
   customers may be enterprises or individuals.  However, governments,
   service providers, and enterprises are increasingly seeking to block
   or filter access to certain content, traffic, or services without the
   knowledge or agreement of affected users.  Where these organizations
   do not directly control networks themselves, they commonly aim to
   make use of intermediary systems to implement the blocking or
   filtering.

   While blocking and filtering remain highly contentious in many cases,
   the desire to restrict communications or access to content will
   likely continue to exist.

   The difference between "blocking" and "filtering" is a matter of
   scale and perspective.  "Blocking" often refers to preventing access
   to resources in the aggregate, while "filtering" refers to preventing
   access to specific resources within an aggregate.  Both blocking and
   filtering can be implemented at the level of "services" (web hosting
   or video streaming, for example) or at the level of particular
   "content."  For the analysis presented in this document, the
   distinction between blocking and filtering does not create
   meaningfully different conclusions.  Hence, in the remainder of this
   document, we will treat the terms as being generally equivalent and
   applicable to restrictions on both content and services.

   This document aims to clarify the technical implications and
   trade-offs of various blocking strategies and to identify the
   potential for different strategies to cause harmful side effects
   ("collateral damage") for Internet users and the overall Internet
   architecture.  This analysis is limited to technical blocking
   mechanisms.  The scope of the analyzed blocking is limited to
   intentional blocking, not accidental blocking due to
   misconfiguration or as an unintentional side effect of something
   else.

   Filtering may be considered legal, illegal, ethical, or unethical in
   different places, at different times, and by different parties.  This
   document is intended for those who are conducting filtering or are
   considering conducting filtering and want to understand the
   implications of their decisions with respect to the Internet
   architecture and the trade-offs that come with each type of filtering
   strategy.  This document does not present formulas on how to make
   those trade-offs; it is likely that filtering decisions require
   knowledge of context-specific details.  Whether particular forms of
   filtering are lawful in particular jurisdictions raises complicated
   legal questions that are outside the scope of this document.  For
   similar reasons, questions about the ethics of particular forms of
   filtering are also out of scope.

2.  Filtering Examples

   Blocking systems have evolved alongside the Internet technologies
   they seek to restrict.  Looking back at the history of the Internet,
   there have been several such systems deployed by different parties
   and for different purposes.

   Firewalls: Firewalls of various sorts are very commonly employed at
   many points in today's Internet [RFC2979].  They can be deployed
   either on end hosts (under user or administrator control) or in the
   network, typically at network boundaries.  While the Internet
   Security Glossary [RFC4949] contains an extended definition of a
   firewall, informally, most people would tend to think of a firewall
   as simply "something that blocks unwanted traffic" (see [RFC4948] for
   a discussion on many types of unwanted traffic).  While there are
   many sorts of firewalls, there are several specific types of firewall
   functionality worth noting.

   o  Stateless Packet Filtering: Stateless packet filters block
      according to content-neutral rules, e.g., blocking all inbound
      connections or blocking outbound connections on certain ports,
      protocols, or network-layer addresses.  One example is blocking
      outbound connections to port 25 (a minimal sketch follows this
      list).

   o  Stateful Packet Filtering: More advanced configurations require
      keeping state used to enforce flow-based policies, e.g., blocking
      inbound traffic for flows that have not been established.

   o  Deep Packet Inspection: Yet more advanced configurations perform
      deep packet inspection and filter or block based on the content
      carried.  Many firewalls include web filtering capabilities (see
      below).
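
   As an illustration of the stateless case above, the following Python
   sketch (a minimal example with hypothetical rules and field names,
   not a description of any particular firewall product) examines only
   header fields of each packet against content-neutral rules such as
   "block outbound connections to port 25":

      # Minimal sketch of a stateless packet filter: each rule looks
      # only at header fields of the packet in hand, never at payload
      # or flow state.
      from dataclasses import dataclass
      from ipaddress import ip_address, ip_network

      @dataclass
      class Packet:
          src: str
          dst: str
          protocol: str    # e.g., "tcp" or "udp"
          dst_port: int
          direction: str   # "inbound" or "outbound"

      # Hypothetical content-neutral rules:
      # (direction, protocol, destination network, destination port)
      BLOCK_RULES = [
          ("outbound", "tcp", ip_network("0.0.0.0/0"), 25),
          ("inbound",  "tcp", ip_network("192.0.2.0/24"), 23),
      ]

      def permitted(pkt: Packet) -> bool:
          for direction, proto, net, port in BLOCK_RULES:
              if (pkt.direction == direction and pkt.protocol == proto
                      and ip_address(pkt.dst) in net
                      and pkt.dst_port == port):
                  return False  # matched a block rule; drop the packet
          return True           # default allow (blacklist model)

      # Outbound SMTP attempt is blocked:
      print(permitted(Packet("198.51.100.7", "203.0.113.1",
                             "tcp", 25, "outbound")))   # False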

   Web Filtering: HTTP and HTTPS are common targets for blocking and
   filtering, typically targeted at specific URIs.  Some enterprises use
   HTTP blocking to block non-work-appropriate web sites, and several
   nations require HTTP and HTTPS filtering by their ISPs in order to
   block content deemed illegal.  HTTPS is a challenge for these
   systems, because the URI in an HTTPS request is carried inside the
   encrypted channel.  To block access to content made accessible via
   HTTPS, filtering systems thus must either block based on network- and
   transport-layer headers (IP address and/or port), or else obtain a
   trust anchor certificate that is trusted by endpoints (and thus act
   as a man in the middle).  These filtering systems often take the form
   of "portals" or "enterprise proxies" presenting their own,
   dynamically generated HTTPS certificates.  (See further discussion in
   Section 5.)
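
   The following Python sketch (with hypothetical blocklist entries and
   helper names) illustrates the difference in what a web filter can
   see: for plaintext HTTP the full URI is available, while for HTTPS a
   filter that does not interpose its own certificates sees at most the
   server name and address and so can only block at host granularity:

      # Sketch of URI-based web filtering; entries are illustrative.
      from urllib.parse import urlsplit

      BLOCKED_HOSTS = {"blocked.example"}
      BLOCKED_URIS = {("www.example.com", "/banned/page.html")}

      def http_request_allowed(uri: str) -> bool:
          # Plaintext HTTP: the full URI is visible to the filter.
          parts = urlsplit(uri)
          if parts.hostname in BLOCKED_HOSTS:
              return False
          return (parts.hostname, parts.path) not in BLOCKED_URIS

      def https_connection_allowed(server_name: str) -> bool:
          # HTTPS without a man in the middle: only the server name
          # (e.g., from the TLS SNI extension) and IP address are
          # visible, so per-URI filtering is not possible.
          return server_name not in BLOCKED_HOSTS

      print(http_request_allowed(
          "http://www.example.com/banned/page.html"))     # False
      print(https_connection_allowed("www.example.com"))  # True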

   Spam Filtering: Spam filtering is one of the oldest forms of content
   filtering.  Spam filters evaluate messages based on a variety of
   criteria and information sources to decide whether a given message is
   spam.  For example, DNS Blacklists use DNS queries, with the address
   octets reversed under a list-specific zone, to flag whether an IP
   address is a known spam source [RFC5782].  Spam filters
   can be installed on user devices (e.g., in a mail client), operated
   by a mail domain on behalf of users, or outsourced to a third party
   that acts as an intermediate MX proxy.
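
   A DNSBL lookup of the kind described in [RFC5782] can be sketched in
   Python as follows; the zone name below is a placeholder, not a
   recommendation of any particular list:

      # Sketch of a DNSBL query per RFC 5782: reverse the IPv4 octets,
      # append the list's zone, and interpret any A-record answer as
      # "listed".  The zone name is a placeholder.
      import socket

      DNSBL_ZONE = "dnsbl.example"

      def is_listed(ipv4: str, zone: str = DNSBL_ZONE) -> bool:
          name = ".".join(reversed(ipv4.split("."))) + "." + zone
          try:
              socket.gethostbyname(name)  # any answer: address listed
              return True
          except socket.gaierror:         # NXDOMAIN: not listed
              return False

      # RFC 5782 asks each DNSBL to list 127.0.0.2 as a test entry, so
      # a live list could be probed with is_listed("127.0.0.2", zone).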

   Domain Name Seizure: A number of approaches are used to block or
   modify resolution of a domain name.  One approach is to make use of
   ICANN's Uniform Domain-Name Dispute-Resolution Policy (UDRP) for the
   purposes of dealing with fraudulent use of a name.  Other authorities
   may require
   that domains be blocked within their jurisdictions.  Substantial
   research has been performed on the value and efficacy of such
   seizures [Takedown08] [BlackLists14].

   The precise method by which domain names are seized varies from
   place to place.  One approach in use is for queries to be redirected
   to resolve to IP addresses of the authority that hosts information
   about the seizure.  The effectiveness of domain seizures will
   similarly vary based on the method.  In some cases, the person whose
   name was seized will simply use a new name.  In other cases, the
   block may only be effective within a region or when specific name
   service infrastructure is used.

   Seizures can also have overbroad effects, since access to content is
   blocked not only within the jurisdiction of the seizure, but
   globally, even when it may be affirmatively legal elsewhere
   [RojaDirecta].  When domain redirection is implemented at
   intermediate resolvers rather than at authoritative servers, it
   directly contradicts end-to-end assumptions in the DNS security
   architecture [RFC4033], potentially causing validation failures at
   validating end nodes.

   Safe Browsing: Modern web browsers provide some measures to prevent
   users from accessing malicious web sites.  For instance, before
   loading a URI, current versions of Google Chrome and Firefox use the
   Google Safe Browsing service to determine whether or not a given URI
   is safe to load [SafeBrowsing].  The DNS can also be used to store
   third-party information that marks domains as safe or unsafe
   [RFC5782].
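
   A simplified sketch of such a client-side check follows; it keeps a
   local set of short hashes of known-bad URLs and consults it before
   loading a page.  Real services such as [SafeBrowsing] canonicalize
   URLs and exchange partial hashes with a remote server; the prefix
   length and list contents below are illustrative assumptions:

      # Sketch of a browser-side "is this URI safe to load?" check.
      import hashlib

      PREFIX_LEN = 4         # bytes of SHA-256 kept locally (assumed)
      local_prefixes = set()

      def add_unsafe(url: str) -> None:
          digest = hashlib.sha256(url.encode()).digest()
          local_prefixes.add(digest[:PREFIX_LEN])

      def needs_full_check(url: str) -> bool:
          # True means "confirm with the full service before loading";
          # False means the URL can be loaded directly.
          digest = hashlib.sha256(url.encode()).digest()
          return digest[:PREFIX_LEN] in local_prefixes

      add_unsafe("http://malware.example/bad")
      print(needs_full_check("http://malware.example/bad"))     # True
      print(needs_full_check("http://example.com/index.html"))  # False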

   Manipulation of routing and addressing data: Governments have
   recently intervened in the management of IP addressing and routing
   information in order to maintain control over a specific set of DNS
   servers.  As part of an internationally coordinated response to the
   DNSChanger malware, a Dutch court ordered the RIPE NCC to freeze the
   accounts of several resource holders as a means to limit the resource
   holders' ability to use certain address blocks [GhostClickRIPE] (also
   see Section 4.3).  These actions have led to concerns that the number
   resource certification system and related secure routing technologies
   developed by the IETF's SIDR working group might be subject to
   government manipulation as well [RFC6480], potentially for the
   purpose of denying targeted networks access to the Internet.

   Ingress filtering: Network service providers use ingress filtering
   [RFC2827] [RFC3704] as a means to prevent source address spoofing,
   which is used as part of other attacks.
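
   A sketch of the forwarding decision made by a provider edge applying
   ingress filtering follows; the interface names and prefixes are
   purely illustrative (documentation address ranges):

      # Sketch of ingress filtering per [RFC2827]/[RFC3704]: only
      # forward packets whose source address falls within the prefixes
      # assigned to the customer interface they arrived on.
      from ipaddress import ip_address, ip_network

      CUSTOMER_PREFIXES = {
          "customer-a": [ip_network("192.0.2.0/24")],
          "customer-b": [ip_network("198.51.100.0/24")],
      }

      def accept_from(interface: str, src: str) -> bool:
          nets = CUSTOMER_PREFIXES.get(interface, [])
          return any(ip_address(src) in net for net in nets)

      print(accept_from("customer-a", "192.0.2.10"))    # True
      print(accept_from("customer-a", "203.0.113.5"))   # False: spoofed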

   Data loss prevention (DLP): Enterprise and other networks are
   concerned with potential leaking of confidential information, whether
   accidental or intentional.  Some of the tools used for this purpose
   are similar to the blocking and filtering mechanisms that are the
   main subject of this document.  In particular, enterprise proxies
   might be part of a DLP solution.

3.  Characteristics of Blocking Systems

   At a generic level, blocking systems can be characterized by four
   attributes: the party who sets the blocking policy, the purpose of
   the blocking, the intended target of the blocking, and the Internet
   component(s) used as the basis of the blocking system.

3.1.  The Party Who Sets Blocking Policies

   Parties that institute blocking policies include governments, courts,
   enterprises, network operators, reputation trackers, application
   providers, and individual end users.  A government might create laws
   based on cultural norms and/or its elected mandate.  Enterprises
   might use cultural, industry, or legal norms to guide their policies.

   There can be several steps of translation and transformation from the
   original intended purpose -- first to laws, then to (government)
   regulation, followed by high-level policies within, e.g., network
   operators, and from those policies to filtering architecture and
   implementation.  Each of those steps is a potential source of
   unintended consequences as discussed in this document.

   In some cases, the policy setting entity is the same as the entity
   that enforces the policy.  For example, a network operator might
   install a firewall in its own networking equipment, or a web
   application provider might block responses between its web server and
   certain clients.

   In other cases, the policy setting entity is different from the
   entity that enforces the policy.  Such policy might be imposed upon
   the enforcing entity, such as in the case of blocking initiated by
   governments, or the enforcing entity might explicitly choose to use
   policy set by others, such as in the case of a reputation system used
   by a spam filter or safe browsing service.  Because a policy might be
   enforced by others, it is best if it can be expressed in a form that
   is independent of the enforcing technology.

3.2.  Purposes of Blocking

   There are a variety of motivations to filter:

   o  Preventing or responding to security threats.  Network operators,
      enterprises, application providers, and end users often block
      communications that are believed to be associated with security
      threats or network attacks.

   o  Restricting objectionable content or services.  Certain
      communications may be viewed as undesirable, harmful, or illegal
      by particular governments, enterprises, or users.  Governments may
      seek to block communications that are deemed to be defamation,
      hate speech, obscenity, intellectual property infringement, or
      otherwise objectionable.  Enterprises may seek to restrict
      employees from accessing content that is not deemed to be work
      appropriate.  Parents may restrict their children from accessing
      content or services targeted for adults.

   o  Restricting access based on business arrangements.  Some networks
      are designed so as to only provide access to certain content or
      services ("walled gardens"), or to only provide limited access
      until end users pay for full Internet services (captive portals
      provided by hotspot operators, for example).

3.2.1.  Blacklist vs. Whitelist Model

   Note that the purpose for which blocking occurs often dictates
   whether the blocking system operates on a blacklist model, where
   communications are allowed by default but a subset are blocked, or a
   whitelist model, where communications are blocked by default with
   only a subset allowed.  Captive portals, walled gardens, and
   sandboxes used for security or network endpoint assessment usually
   require a whitelist model since the scope of communications allowed
   is narrow.  Blocking for other purposes often uses a blacklist model
   since only individual content or traffic is intended to be blocked.
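
   The difference between the two models is only in the default
   disposition, as the following Python sketch (with purely
   illustrative names) makes concrete:

      # Blacklist model: allow by default, block a listed subset.
      def blacklist_allows(target, blocked):
          return target not in blocked

      # Whitelist model: block by default, allow a listed subset.
      def whitelist_allows(target, allowed):
          return target in allowed

      # A captive portal before payment (whitelist model):
      portal_hosts = {"portal.example", "payment.example"}
      print(whitelist_allows("payment.example", portal_hosts))  # True
      print(whitelist_allows("news.example", portal_hosts))     # False

      # A spam filter (blacklist model):
      print(blacklist_allows("colleague.example",
                             {"spam-source.example"}))          # True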

3.3.  Intended Targets of Blocking

   Blocking systems are instituted so as to target particular content,
   services, endpoints, or some combination of these.  For example, a
   "content" filtering system used by an enterprise might block access
   to specific URIs whose content is deemed by the enterprise to be
   inappropriate for the workplace.  This is distinct from a "service"
   filtering system that blocks all web traffic (perhaps as part of a
   parental control system on an end-user device) and also distinct from
   an "endpoint" filtering system in which a web application blocks
   traffic from specific endpoints that are suspected of malicious
   activity.

   As discussed in Section 4, the design of a blocking system may affect
   content, services, or endpoints other than those that are the
   intended targets.  For example, even when the domain name seizures
   described above are intended to address specific web pages associated
   with illegal activity, by removing the domains from use they affect
   all services made available by the hosts associated with those names,
   including mail services and web services that may be unrelated to the
   illegal activity.  Depending on where the block is imposed within the
   DNS hierarchy, entirely unrelated organizations may be impacted.

3.4.  Components Used for Blocking

   Broadly speaking, the process of delivering an Internet service
   involves three different components:

   1.  Endpoints: The actual content of the service is typically an
       application-layer protocol between two or more Internet hosts.
       In many protocols, there are two endpoints, a client and a
       server.

   2.  Network services: The endpoints communicate by way of a
       collection of IP networks that use routing protocols to determine
       how to deliver packets between the endpoints.

   3.  Rendezvous services: Service endpoints are typically identified
       by identifiers that are more "human-friendly" than IP addresses.
       Rendezvous services allow one endpoint to figure out how to
       contact another endpoint based on an identifier.  An example of a
       rendezvous service is the domain name system.  Distributed Hash
       Tables (DHTs) have also been used as rendezvous services.

   Consider, for example, an HTTP transaction fetching the content of
   the URI <http://example.com/index.html>.  The client endpoint is an
   end host running a browser.  The client uses the DNS as a rendezvous
   service when it performs a AAAA query to obtain the IP address for
   the server name "example.com".  The client then establishes a
   connection to the server, and sends the actual HTTP request.  The
   server endpoint then responds to the HTTP request.
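
   The steps of that example can be sketched in Python as follows (a
   minimal illustration without error handling; getaddrinfo may return
   an IPv4 address from an A record rather than an IPv6 address from a
   AAAA record, but the rendezvous role of the DNS is the same):

      # Sketch of the example transaction: rendezvous (DNS), network
      # (TCP connection), and endpoint (HTTP request/response) roles.
      import socket

      def fetch(host="example.com", path="/index.html"):
          # Rendezvous: map the human-friendly name to an address.
          family, _, _, _, addr = socket.getaddrinfo(
              host, 80, type=socket.SOCK_STREAM)[0]
          # Network: deliver packets between the two endpoints.
          with socket.socket(family, socket.SOCK_STREAM) as s:
              s.connect(addr)
              # Endpoints: the application-layer exchange itself.
              request = ("GET %s HTTP/1.1\r\nHost: %s\r\n"
                         "Connection: close\r\n\r\n" % (path, host))
              s.sendall(request.encode())
              chunks = []
              while True:
                  data = s.recv(4096)
                  if not data:
                      break
                  chunks.append(data)
          return b"".join(chunks)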

   As another example, in the SIP protocol, the two endpoints
   communicating are IP phones, and the rendezvous service is provided
   by an application-layer SIP proxy as well as the DNS.

   Blocking access to Internet content, services, or endpoints is done
   by controlling one or more of the components involved in providing
   the communications used to access the content, services, or
   endpoints.  In the HTTP example above, the successful
   completion of the HTTP request could have been prevented in several
   ways:

   o  [Endpoint] Preventing the client from making the request

   o  [Endpoint] Preventing the server from responding to the request

   o  [Endpoint] Preventing the client from making the DNS request
      needed to resolve example.com

   o  [Network] Preventing the request from reaching the server

   o  [Network] Preventing the response from reaching the client

   o  [Network] Preventing the client from reaching the DNS servers

   o  [Network] Preventing the DNS responses from reaching the client

   o  [Rendezvous] Preventing the DNS servers from providing the client
      the correct IP address of the server
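
   As an illustration of the last (rendezvous) case, a resolver under
   the blocking party's control could answer queries for listed names
   with the address of a host serving a notice page instead of the
   correct address, as in the domain seizure example of Section 2.  The
   names and addresses in this Python sketch are illustrative:

      # Sketch of rendezvous-based blocking in a DNS resolver.
      import socket

      BLOCKED_NAMES = {"blocked.example"}
      NOTICE_PAGE_ADDRESS = "192.0.2.80"   # hypothetical notice server

      def resolve(name):
          if name in BLOCKED_NAMES:
              # Deliberately wrong answer; a validating client would
              # see this as a DNSSEC failure (see [RFC4033]).
              return NOTICE_PAGE_ADDRESS
          return socket.gethostbyname(name)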

   Those who desire to block communications will typically have access
   to only one or two components; therefore their choices for how to
   perform blocking will be limited.  End users and application
   providers can usually only control their own software and hardware,
   which means that they are limited to endpoint-based filtering.  Some
   network operators offer filtering services that their customers can
   activate individually, in which case end users might have network-
   based filtering systems available to them.  Network operators can
   control their own networks and the rendezvous services for which they
   provide infrastructure support (e.g., DNS resolvers) or to which they
   may have access (e.g., SIP proxies), but not usually endpoints.
   Enterprises usually have access to their own networks and endpoints
   for filtering purposes.  Governments might make arrangements with the
   operators or owners of any of the three components that exist within
   their jurisdictions to perform filtering.

   In the next section, blocking systems designed according to each of
   the three patterns -- network services, rendezvous services, and
   endpoints -- are evaluated for their technical and architectural
   implications.  The analysis is as agnostic as possible as to who sets
   the blocking policy (government, end user, network operator,
   application provider, or enterprise), but in some cases the way in
   which a particular blocking design pattern is used might differ,
   depending on who desires the block.  For example, a network-based
   firewall provided by an ISP that parents can elect to use for
   parental control purposes will likely function differently from one
   that all ISPs in a particular jurisdiction are required to use by the
   local government, even though in both cases the same component
   (network) forms the basis of the blocking system.


