
RFC 8578

Deterministic Networking Use Cases



3. Electrical Utilities

3.1. Use Case Description

Many systems that an electrical utility deploys today rely on high availability and deterministic behavior of the underlying networks. Presented here are use cases for transmission, generation, and distribution, including key timing and reliability metrics. In addition, security issues and industry trends that affect the architecture of next-generation utility networks are discussed.

3.1.1. Transmission Use Cases

3.1.1.1. Protection
"Protection" means not only the protection of human operators but also the protection of the electrical equipment and the preservation of the stability and frequency of the grid. If a fault occurs in the transmission or distribution of electricity, then severe damage can occur to human operators, electrical equipment, and the grid itself, leading to blackouts. Communication links, in conjunction with protection relays, are used to selectively isolate faults on high-voltage lines, transformers, reactors, and other important electrical equipment. The role of the teleprotection system is to selectively disconnect a faulty part by transferring command signals within the shortest possible time.
3.1.1.1.1. Key Criteria
The key criteria for measuring teleprotection performance are command transmission time, dependability, and security. These criteria are defined by International Electrotechnical Commission (IEC) Standard 60834 [IEC-60834] as follows:

   o  Transmission time (speed): The time between the moment when a
      state change occurs at the transmitter input and the moment of
      the corresponding change at the receiver output, including
      propagation delay.  The overall operating time for a
      teleprotection system is the sum of (1) the time required to
      initiate the command at the transmitting end, (2) the propagation
      delay over the network (including equipment), and (3) the time
      required to make the necessary selections and decisions at the
      receiving end, including any additional delay due to a noisy
      environment.
   o  Dependability: The ability to issue and receive valid commands in
      the presence of interference and/or noise, by minimizing the
      Probability of Missing Commands (PMC).  Dependability targets are
      typically set for a specific Bit Error Rate (BER) level.

   o  Security: The ability to prevent false tripping due to a noisy
      environment, by minimizing the Probability of Unwanted Commands
      (PUC).  Security targets are also set for a specific BER level.

   Additional elements of the teleprotection system that impact its
   performance include:

   o  Network bandwidth

   o  Failure recovery capacity (aka resiliency)

3.1.1.1.2. Fault Detection and Clearance Timing
Most power-line equipment can tolerate short circuits or faults for up to approximately five power cycles before sustaining irreversible damage or affecting other segments in the network.  This translates to a total fault clearance time of 100 ms.  As a safety precaution, however, the actual operation time of protection systems is limited to 70-80% of this period, including fault recognition time, command transmission time, and line breaker switching time.

Some system components, such as large electromechanical switches, require a particularly long time to operate and take up the majority of the total clearance time, leaving only a 10 ms window for the telecommunications part of the protection scheme, independent of the distance of travel.  Given the sensitivity of the issue, new networks impose requirements that are even more stringent: IEC Standard 61850-5:2013 [IEC-61850-5:2013] limits the transfer time for protection messages to 1/4-1/2 cycle or 4-8 ms (for 60 Hz lines) for messages considered the most critical.
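
As a rough illustration of how these numbers combine, the following sketch (Python; purely illustrative, with the 50 Hz line frequency and the breaker operating time assumed for the example rather than taken from the cited standards) walks through the clearance-time budget described above:

   # Illustrative fault-clearance timing budget.  Assumes a 50 Hz line,
   # a 5-cycle damage tolerance, and the 80% safety margin noted above.
   CYCLE_MS = 1000 / 50                         # one power cycle = 20 ms
   total_clearance_ms = 5 * CYCLE_MS            # ~100 ms before damage
   protection_budget_ms = 0.8 * total_clearance_ms   # 70-80% safety limit

   # Large electromechanical breakers consume most of this budget,
   # leaving roughly 10 ms for the telecommunications part.
   breaker_and_relay_ms = 70                    # assumed value, for illustration
   telecom_window_ms = protection_budget_ms - breaker_and_relay_ms
   print(telecom_window_ms)                     # ~10 ms
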
3.1.1.1.3. Symmetric Channel Delay
Teleprotection channels that are differential must be synchronous; this means that any delays on the transmit and receive paths must match each other. Ideally, teleprotection systems support zero asymmetric delay; typical legacy relays can tolerate delay discrepancies of up to 750 us. Some tools available for lowering delay variation below this threshold are as follows:

   o  For legacy systems using Time-Division Multiplexing (TDM), jitter
      buffers at the multiplexers on each end of the line can be used
      to offset delay variation by queuing sent and received packets.
      The length of the queues must balance the need to regulate the
      rate of transmission with the need to limit overall delay, as
      larger buffers result in increased latency.

   o  For jitter-prone IP networks, traffic management tools can ensure
      that the teleprotection signals receive the highest transmission
      priority to minimize jitter.

   o  Standard packet-based synchronization technologies, such as the
      IEEE 1588-2008 Precision Time Protocol (PTP) [IEEE-1588] and
      synchronous Ethernet (syncE) [syncE], can help keep networks
      stable by maintaining a highly accurate clock source on the
      various network devices.
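
The asymmetry and jitter tolerances above can be thought of as simple pass/fail checks on measured path delays.  The sketch below (illustrative only; the function names and the measured values are assumptions, not part of this document) shows the 750 us legacy tolerance and the basic jitter-buffer trade-off:

   # Sketch: checking transmit/receive path symmetry against a legacy-relay
   # tolerance and estimating the buffering a TDM jitter buffer must add.
   ASYMMETRY_TOLERANCE_US = 750        # typical legacy relay tolerance (see text)

   def asymmetry_ok(tx_delay_us: float, rx_delay_us: float) -> bool:
       """True if the transmit and receive path delays match closely enough."""
       return abs(tx_delay_us - rx_delay_us) <= ASYMMETRY_TOLERANCE_US

   def playout_delay_us(min_delay_us: float, max_delay_us: float) -> float:
       """A jitter buffer must absorb the worst-case delay variation; a larger
       buffer therefore adds that much latency to the overall budget."""
       return max_delay_us - min_delay_us

   print(asymmetry_ok(3200, 3650))     # True: 450 us of asymmetry
   print(playout_delay_us(3000, 3250)) # 250 us of buffering needed
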
3.1.1.1.4. Teleprotection Network Requirements
Table 1 captures the main network metrics.  (These metrics are based on IEC Standard 61850-5:2013 [IEC-61850-5:2013].)

   +---------------------------------+---------------------------------+
   |   Teleprotection Requirement    |            Attribute            |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |             4-10 ms             |
   |                                 |                                 |
   |    Asymmetric delay required    |               Yes               |
   |                                 |                                 |
   |          Maximum jitter         |   Less than 250 us (750 us for  |
   |                                 |           legacy IEDs)          |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Less than 50 ms - hitless    |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |           0.1% to 1%            |
   +---------------------------------+---------------------------------+

              Table 1: Teleprotection Network Requirements
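
The rows of Table 1 can be read as a per-flow service contract.  The following sketch (Python; the data structure and field names are assumptions for illustration, not defined by this document or by IEC 61850-5) encodes Table 1 and checks a set of measured path metrics against it:

   # Sketch: Table 1 as data, plus a check of measured path metrics.
   from dataclasses import dataclass

   @dataclass
   class FlowRequirements:
       max_one_way_delay_ms: float
       max_jitter_us: float
       min_availability: float     # e.g., 0.999999 for "six nines"
       max_packet_loss: float      # fraction, e.g., 0.01 for 1%
       symmetric_delay: bool

   TELEPROTECTION = FlowRequirements(
       max_one_way_delay_ms=10.0,  # 4-10 ms in Table 1; loose bound taken
       max_jitter_us=250.0,        # 750 us would apply to legacy IEDs
       min_availability=0.999999,
       max_packet_loss=0.01,       # 0.1% to 1%
       symmetric_delay=True,
   )

   def path_meets(req: FlowRequirements, delay_ms, jitter_us, loss) -> bool:
       return (delay_ms <= req.max_one_way_delay_ms
               and jitter_us <= req.max_jitter_us
               and loss <= req.max_packet_loss)

   print(path_meets(TELEPROTECTION, delay_ms=6.0, jitter_us=180.0, loss=0.0005))
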
3.1.1.1.5. Inter-trip Protection Scheme
"Inter-tripping" is the signal-controlled tripping of a circuit breaker to complete the isolation of a circuit or piece of apparatus in concert with the tripping of other circuit breakers. +---------------------------------+---------------------------------+ | Inter-trip Protection | Attribute | | Requirement | | +---------------------------------+---------------------------------+ | One-way maximum delay | 5 ms | | | | | Asymmetric delay required | No | | | | | Maximum jitter | Not critical | | | | | Topology | Point to point, point to | | | multipoint | | | | | Bandwidth | 64 kbps | | | | | Availability | 99.9999% | | | | | Precise timing required | Yes | | | | | Recovery time on node failure | Less than 50 ms - hitless | | | | | Performance management | Yes; mandatory | | | | | Redundancy | Yes | | | | | Packet loss | 0.1% | +---------------------------------+---------------------------------+ Table 2: Inter-trip Protection Network Requirements
3.1.1.1.6. Current Differential Protection Scheme
Current differential protection is commonly used for line protection and is typically used to protect parallel circuits. At both ends of the lines, the current is measured by the differential relays; both relays will trip the circuit breaker if the current going into the line does not equal the current going out of the line. This type of protection scheme assumes that some form of communication is present between the relays at both ends of the line, to allow both relays to compare measured current values. Line differential protection schemes assume that the telecommunications delay between both relays is very low -- often as low as 5 ms. Moreover, as those systems are
   often not time-synchronized, they also assume that the delay over
   symmetric telecommunications paths is constant; this allows the
   comparison of current measurement values taken at exactly the
   same time.

   +---------------------------------+---------------------------------+
   | Current Differential Protection |            Attribute            |
   |           Requirement           |                                 |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |               5 ms              |
   |                                 |                                 |
   |    Asymmetric delay required    |               Yes               |
   |                                 |                                 |
   |          Maximum jitter         |   Less than 250 us (750 us for  |
   |                                 |           legacy IEDs)          |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |            Bandwidth            |             64 kbps             |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Less than 50 ms - hitless    |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |               0.1%              |
   +---------------------------------+---------------------------------+

             Table 3: Current Differential Protection Metrics
3.1.1.1.7. Distance Protection Scheme
The distance (impedance relay) protection scheme is based on voltage and current measurements.  The network metrics are similar (but not identical) to the metrics for current differential protection.

   +---------------------------------+---------------------------------+
   | Distance Protection Requirement |            Attribute            |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |               5 ms              |
   |                                 |                                 |
   |    Asymmetric delay required    |                No               |
   |                                 |                                 |
   |          Maximum jitter         |           Not critical          |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |            Bandwidth            |             64 kbps             |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Less than 50 ms - hitless    |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |               0.1%              |
   +---------------------------------+---------------------------------+

               Table 4: Distance Protection Requirements
3.1.1.1.8. Inter-substation Protection Signaling
This use case describes the exchange of sampled values and/or GOOSE (Generic Object Oriented Substation Events) messages between Intelligent Electronic Devices (IEDs) in two substations for protection and tripping coordination. The two IEDs are in master-slave mode. The Current Transformer or Voltage Transformer (CT/VT) in one substation sends the sampled analog voltage or current value to the Merging Unit (MU) over hard wire. The MU sends the time-synchronized sampled values (as specified by IEC 61850-9-2:2011 [IEC-61850-9-2:2011]) to the slave IED. The slave IED forwards the
   information to the master IED in the other substation.  The master
   IED makes the determination (for example, based on sampled value
   differentials) to send a trip command to the originating IED.  Once
   the slave IED/relay receives the GOOSE message containing the command
   to trip the breaker, it opens the breaker.  It then sends a
   confirmation message back to the master.  All data exchanges between
   IEDs are through sampled values and/or GOOSE messages.
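
The exchange described above can be summarized as a small message-passing sequence.  The sketch below (Python; every function, field, and threshold is an illustrative assumption and not taken from IEC 61850 or this document) mirrors the MU -> slave IED -> master IED -> trip flow:

   # Sketch of the inter-substation trip sequence described in the text.
   TRIP_THRESHOLD = 0.05   # assumed per-unit current differential threshold

   def merging_unit(analog_value, timestamp):
       """CT/VT value arrives over hard wire; the MU publishes a
       time-synchronized sampled value (IEC 61850-9-2 style)."""
       return {"type": "SV", "value": analog_value, "t": timestamp}

   def slave_ied(sv, send_to_master):
       # Forward the sampled value to the master IED in the other substation.
       send_to_master(sv)

   def master_ied(local_sv, remote_sv, send_goose):
       # Decide, e.g., on a sampled-value differential, whether to trip.
       if abs(local_sv["value"] - remote_sv["value"]) > TRIP_THRESHOLD:
           send_goose({"type": "GOOSE", "cmd": "TRIP"})

   def on_goose(msg, open_breaker, send_confirmation):
       if msg["cmd"] == "TRIP":
           open_breaker()                # slave IED/relay opens the breaker
           send_confirmation("TRIPPED")  # confirmation back to the master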

   +---------------------------------+---------------------------------+
   |   Inter-substation Protection   |            Attribute            |
   |           Requirement           |                                 |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |               5 ms              |
   |                                 |                                 |
   |    Asymmetric delay required    |                No               |
   |                                 |                                 |
   |          Maximum jitter         |           Not critical          |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |            Bandwidth            |             64 kbps             |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Less than 50 ms - hitless    |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |                1%               |
   +---------------------------------+---------------------------------+

             Table 5: Inter-substation Protection Requirements

3.1.1.2. Intra-substation Process Bus Communications
This use case describes the data flow from the CT/VT to the IEDs in the substation via the MU. The CT/VT in the substation sends the analog voltage or current values to the MU over hard wire. The MU converts the analog values into digital format (typically time-synchronized sampled values as specified by IEC 61850-9-2:2011 [IEC-61850-9-2:2011]) and sends them to the IEDs in the substation. The Global Positioning System (GPS) Master Clock can send 1PPS or IRIG-B format to the MU through a serial port or IEEE 1588 protocol
   via a network.  1PPS (One Pulse Per Second) is an electrical signal
   that has a width of less than 1 second and a sharply rising or
   abruptly falling edge that accurately repeats once per second.  1PPS
   signals are output by radio beacons, frequency standards, other types
   of precision oscillators, and some GPS receivers.  IRIG (Inter-Range
   Instrumentation Group) time codes are standard formats for
   transferring timing information.  Atomic frequency standards and GPS
   receivers designed for precision timing are often equipped with an
   IRIG output.  Process bus communication using IEC 61850-9-2:2011
   [IEC-61850-9-2:2011] simplifies connectivity within the substation,
   removes the requirement for multiple serial connections, and removes
   the slow serial-bus architectures that are typically used.  This also
   ensures increased flexibility and increased speed with the use of
   multicast messaging between multiple devices.
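
The process-bus traffic pattern is essentially a periodic, multicast stream of timestamped samples.  The sketch below (Python; the 80-samples-per-cycle rate and all names are illustrative assumptions, with the real rates and encodings defined by IEC 61850-9-2 profiles) shows the shape of an MU publishing loop:

   # Sketch of a merging unit publishing time-synchronized sampled values.
   import time

   NOMINAL_HZ = 50
   SAMPLES_PER_CYCLE = 80                                      # assumed rate
   SAMPLE_INTERVAL_S = 1.0 / (NOMINAL_HZ * SAMPLES_PER_CYCLE)  # 250 us

   def publish_sampled_values(read_adc, multicast_send, clock=time.time):
       """Convert each analog reading into a timestamped sample and multicast
       it to all subscribing IEDs.  The clock is assumed to be disciplined by
       IEEE 1588, 1PPS, or IRIG-B as described above."""
       sample_count = 0
       while True:
           sample = {"smpCnt": sample_count,
                     "value": read_adc(),
                     "t": clock()}       # time from the synchronized clock
           multicast_send(sample)        # one frame, many receivers
           sample_count = (sample_count + 1) % (NOMINAL_HZ * SAMPLES_PER_CYCLE)
           time.sleep(SAMPLE_INTERVAL_S)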

   +---------------------------------+---------------------------------+
   |   Intra-substation Protection   |            Attribute            |
   |           Requirement           |                                 |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |               5 ms              |
   |                                 |                                 |
   |    Asymmetric delay required    |                No               |
   |                                 |                                 |
   |          Maximum jitter         |           Not critical          |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |            Bandwidth            |             64 kbps             |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Less than 50 ms - hitless    |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |            Yes or No            |
   |                                 |                                 |
   |           Packet loss           |               0.1%              |
   +---------------------------------+---------------------------------+

             Table 6: Intra-substation Protection Requirements
3.1.1.3. Wide-Area Monitoring and Control Systems
The application of synchrophasor measurement data from Phasor Measurement Units (PMUs) to wide-area monitoring and control systems promises to provide important new capabilities for improving system stability.  Access to PMU data enables more-timely situational awareness over larger portions of the grid than what has been possible historically with normal SCADA (Supervisory Control and Data Acquisition) data.  Handling the volume and the real-time nature of synchrophasor data presents unique challenges for existing application architectures.

The Wide-Area Management System (WAMS) makes it possible for the condition of the bulk power system to be observed and understood in real time so that protective, preventative, or corrective action can be taken.  Because of the very high sampling rate of measurements and the strict requirement for time synchronization of the samples, the WAMS has stringent telecommunications requirements in an IP network, as captured in Table 7:
   +---------------------------------+---------------------------------+
   |         WAMS Requirement        |            Attribute            |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |              50 ms              |
   |                                 |                                 |
   |    Asymmetric delay required    |                No               |
   |                                 |                                 |
   |          Maximum jitter         |           Not critical          |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |    multipoint, multipoint to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |            Bandwidth            |             100 kbps            |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Less than 50 ms - hitless    |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |                1%               |
   |                                 |                                 |
   |     Consecutive packet loss     |     At least one packet per     |
   |                                 |    application cycle must be    |
   |                                 |            received.            |
   +---------------------------------+---------------------------------+

             Table 7: WAMS Special Communication Requirements
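
The strict time-synchronization requirement above follows from how a synchrophasor is formed: the phase angle is only meaningful if the samples are aligned to a common time reference.  The sketch below (Python; the sample rate and names are illustrative assumptions, with real PMU behavior specified by the relevant synchrophasor standards) derives a phasor from one cycle of time-aligned samples:

   # Illustration: deriving a synchrophasor (RMS magnitude and angle) from
   # one nominal cycle of time-synchronized samples via a one-bin DFT.
   import cmath, math

   SAMPLES_PER_CYCLE = 48          # assumed PMU sampling rate for the example

   def synchrophasor(samples):
       """samples: one nominal cycle of waveform values, aligned to UTC.
       Returns (rms_magnitude, phase_angle_radians)."""
       n = len(samples)
       acc = sum(x * cmath.exp(-2j * math.pi * k / n)
                 for k, x in enumerate(samples))
       phasor = (math.sqrt(2) / n) * acc   # scale the DFT bin to an RMS phasor
       return abs(phasor), cmath.phase(phasor)

   # A pure cosine of amplitude 1.0 yields RMS ~0.707 and angle ~0.
   wave = [math.cos(2 * math.pi * k / SAMPLES_PER_CYCLE)
           for k in range(SAMPLES_PER_CYCLE)]
   print(synchrophasor(wave))
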
3.1.1.4. WAN Engineering Guidelines Requirement Classification
The IEC has published a technical report (TR) that offers guidelines on how to define and deploy Wide-Area Networks (WANs) for the interconnection of electric substations, generation plants, and SCADA operation centers.  IEC TR 61850-90-12:2015 [IEC-61850-90-12:2015] provides four classes of WAN communication requirements, as summarized in Table 8:

   +----------------+-----------+----------+----------+----------------+
   |      WAN       |  Class WA | Class WB | Class WC |    Class WD    |
   |  Requirement   |           |          |          |                |
   +----------------+-----------+----------+----------+----------------+
   | Application    | EHV       | HV (High | MV       | General-       |
   | field          | (Extra-   | Voltage) | (Medium  | purpose        |
   |                | High      |          | Voltage) |                |
   |                | Voltage)  |          |          |                |
   |                |           |          |          |                |
   | Latency        | 5 ms      | 10 ms    | 100 ms   | >100 ms        |
   |                |           |          |          |                |
   | Jitter         | 10 us     | 100 us   | 1 ms     | 10 ms          |
   |                |           |          |          |                |
   | Latency        | 100 us    | 1 ms     | 10 ms    | 100 ms         |
   | asymmetry      |           |          |          |                |
   |                |           |          |          |                |
   | Time accuracy  | 1 us      | 10 us    | 100 us   | 10 to 100 ms   |
   |                |           |          |          |                |
   | BER            | 10^-7 to  | 10^-5 to | 10^-3    |                |
   |                | 10^-6     | 10^-4    |          |                |
   |                |           |          |          |                |
   | Unavailability | 10^-7 to  | 10^-5 to | 10^-3    |                |
   |                | 10^-6     | 10^-4    |          |                |
   |                |           |          |          |                |
   | Recovery delay | Zero      | 50 ms    | 5 s      | 50 s           |
   |                |           |          |          |                |
   | Cybersecurity  | Extremely | High     | Medium   | Medium         |
   |                | high      |          |          |                |
   +----------------+-----------+----------+----------+----------------+

            Table 8: Communication Requirements (Courtesy of
                       IEC TR 61850-90-12:2015)
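
One way to use Table 8 is as a lookup: given an application's latency and jitter tolerance, pick the least stringent class that still satisfies it.  The sketch below (Python; the selection logic and the numeric stand-in for ">100 ms" are illustrative assumptions) transcribes the latency and jitter rows:

   # Sketch: choosing the most relaxed WAN class that meets a requirement.
   WAN_CLASSES = {                 # class: (max latency ms, max jitter ms)
       "WA": (5,    0.010),
       "WB": (10,   0.100),
       "WC": (100,  1.0),
       "WD": (1000, 10.0),         # ">100 ms" approximated for the example
   }

   def select_class(required_latency_ms, required_jitter_ms):
       """Return the least stringent class meeting both requirements."""
       for name in ("WD", "WC", "WB", "WA"):        # relaxed -> strict
           latency, jitter = WAN_CLASSES[name]
           if latency <= required_latency_ms and jitter <= required_jitter_ms:
               return name
       return "WA"                                  # strictest class fallback

   print(select_class(required_latency_ms=10, required_jitter_ms=0.1))  # "WB"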

3.1.2. Generation Use Case

Energy generation systems are complex infrastructures that require control of both the generated power and the generation infrastructure.
3.1.2.1. Control of the Generated Power
The electrical power generation frequency must be maintained within a very narrow band.  Deviations from the acceptable frequency range are detected, and the required signals are sent to the power plants for frequency regulation.  Automatic Generation Control (AGC) is a system for adjusting the power output of generators at different power plants, in response to changes in the load.

   +---------------------------------+---------------------------------+
   |     FCAG (Frequency Control     |            Attribute            |
   |      Automatic Generation)      |                                 |
   |           Requirement           |                                 |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |              500 ms             |
   |                                 |                                 |
   |    Asymmetric delay required    |                No               |
   |                                 |                                 |
   |          Maximum jitter         |           Not critical          |
   |                                 |                                 |
   |             Topology            |          Point to point         |
   |                                 |                                 |
   |            Bandwidth            |             20 kbps             |
   |                                 |                                 |
   |           Availability          |             99.999%             |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |               N/A               |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |                1%               |
   +---------------------------------+---------------------------------+

                Table 9: FCAG Communication Requirements
3.1.2.2. Control of the Generation Infrastructure
The control of the generation infrastructure combines requirements from industrial automation systems and energy generation systems.  This section describes the use case for control of the generation infrastructure of a wind turbine.

Figure 1 presents the subsystems that operate a wind turbine.

                  Figure 1: Wind Turbine Control Network

The subsystems shown in Figure 1 include the following:

   o  WROT (rotor control)

   o  WNAC (nacelle control) (nacelle: housing containing the
      generator)

   o  WTRM (transmission control)

   o  WGEN (generator)

   o  WYAW (yaw controller) (of the tower head)

   o  WCNV (in-turbine power converter)

   o  WTRF (wind turbine transformer information)

   o  WMET (external meteorological station providing real-time
      information to the tower's controllers)

   o  WTUR (wind turbine general information)

   o  WREP (wind turbine report information)

   o  WSLG (wind turbine state log information)

   o  WALG (wind turbine analog log information)

   o  WTOW (wind turbine tower information)

   Traffic characteristics relevant to the network planning and
   dimensioning process in a wind turbine scenario are listed below.
   The values in this section are based mainly on the relevant
   references [Ahm14] and [Spe09].  Each logical node (Figure 1) is a
   part of the metering network and produces analog measurements and
   status information that must comply with their respective data-rate
   constraints.

   +-----------+--------+----------+-----------+-----------+-----------+
   | Subsystem | Sensor |  Analog  | Data Rate |   Status  | Data Rate |
   |           | Count  |  Sample  | (bytes/s) |   Sample  | (bytes/s) |
   |           |        |  Count   |           |   Count   |           |
   +-----------+--------+----------+-----------+-----------+-----------+
   |    WROT   |   14   |    9     |    642    |     5     |     10    |
   |           |        |          |           |           |           |
   |    WTRM   |   18   |    10    |    2828   |     8     |     16    |
   |           |        |          |           |           |           |
   |    WGEN   |   14   |    12    |   73764   |     2     |     4     |
   |           |        |          |           |           |           |
   |    WCNV   |   14   |    12    |   74060   |     2     |     4     |
   |           |        |          |           |           |           |
   |    WTRF   |   12   |    5     |   73740   |     2     |     4     |
   |           |        |          |           |           |           |
   |    WNAC   |   12   |    9     |    112    |     3     |     6     |
   |           |        |          |           |           |           |
   |    WYAW   |   7    |    8     |    220    |     4     |     8     |
   |           |        |          |           |           |           |
   |    WTOW   |   4    |    1     |     8     |     3     |     6     |
   |           |        |          |           |           |           |
   |    WMET   |   7    |    7     |    228    |     -     |     -     |
   +-----------+--------+----------+-----------+-----------+-----------+

               Table 10: Wind Turbine Data-Rate Constraints
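
For network dimensioning, the per-subsystem rates in Table 10 can simply be summed to estimate the steady-state metering load on the intra-turbine network.  The sketch below (Python; the aggregation itself is an illustrative calculation, not a figure from the references) transcribes Table 10:

   # Sketch: aggregate metering traffic implied by Table 10 (bytes/s).
   DATA_RATES = {   # subsystem: (analog bytes/s, status bytes/s)
       "WROT": (642, 10),   "WTRM": (2828, 16),  "WGEN": (73764, 4),
       "WCNV": (74060, 4),  "WTRF": (73740, 4),  "WNAC": (112, 6),
       "WYAW": (220, 8),    "WTOW": (8, 6),      "WMET": (228, 0),
   }

   total_bytes_per_s = sum(a + s for a, s in DATA_RATES.values())
   print(total_bytes_per_s)                    # 225660 bytes/s
   print(total_bytes_per_s * 8 / 1e6, "Mbps")  # ~1.8 Mbps before overhead
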
   QoS constraints for different services are presented in Table 11.
   These constraints are defined by IEEE Standard 1646 [IEEE-1646] and
   IEC Standard 61400 Part 25 [IEC-61400-25].

   +---------------------+---------+-------------+---------------------+
   |       Service       | Latency | Reliability |   Packet Loss Rate  |
   +---------------------+---------+-------------+---------------------+
   |  Analog measurement |  16 ms  |    99.99%   |        <10^-6       |
   |                     |         |             |                     |
   |  Status information |  16 ms  |    99.99%   |        <10^-6       |
   |                     |         |             |                     |
   |  Protection traffic |   4 ms  |   100.00%   |        <10^-9       |
   |                     |         |             |                     |
   |    Reporting and    |   1 s   |    99.99%   |        <10^-6       |
   |       logging       |         |             |                     |
   |                     |         |             |                     |
   |  Video surveillance |   1 s   |    99.00%   |     No specific     |
   |                     |         |             |     requirement     |
   |                     |         |             |                     |
   | Internet connection |  60 min |    99.00%   |     No specific     |
   |                     |         |             |     requirement     |
   |                     |         |             |                     |
   |   Control traffic   |  16 ms  |   100.00%   |        <10^-9       |
   |                     |         |             |                     |
   |     Data polling    |  16 ms  |    99.99%   |        <10^-6       |
   +---------------------+---------+-------------+---------------------+

        Table 11: Wind Turbine Reliability and Latency Constraints

3.1.2.2.1. Intra-domain Network Considerations
A wind turbine is composed of a large set of subsystems, including sensors and actuators that require time-critical operation.  The reliability and latency constraints of these different subsystems are shown in Table 11.  These subsystems are connected to an intra-domain network that is used to monitor and control the operation of the turbine and connect it to the SCADA subsystems.

The different components are interconnected using fiber optics, industrial buses, industrial Ethernet, EtherCAT [EtherCAT], or a combination thereof.  Industrial signaling and control protocols such as Modbus [MODBUS], PROFIBUS [PROFIBUS], PROFINET [PROFINET], and EtherCAT are used directly on top of the Layer 2 transport or encapsulated over TCP/IP.

The data collected from the sensors and condition-monitoring systems is multiplexed onto fiber cables for transmission to the base of the tower and to remote control centers.  The turbine controller continuously monitors the condition of the wind turbine and collects
   statistics on its operation.  This controller also manages a large
   number of switches, hydraulic pumps, valves, and motors within the
   wind turbine.

   There is usually a controller at the bottom of the tower and also in
   the nacelle.  The communication between these two controllers usually
   takes place using fiber optics instead of copper links.  Sometimes, a
   third controller is installed in the hub of the rotor and manages the
   pitch of the blades.  That unit usually communicates with the nacelle
   unit using serial communications.
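
As an example of the polling traffic such protocols generate, the sketch below (Python, standard library only) builds a single Modbus-TCP "read holding registers" request of the kind a turbine or park controller might send to a subsystem.  The register address, unit ID, and host are hypothetical; only the frame layout follows the published Modbus-TCP conventions:

   # Sketch: one Modbus-TCP poll (function 0x03, read holding registers).
   import socket
   import struct

   def read_holding_registers(host, unit_id, start_addr, count, port=502):
       # MBAP header: transaction id, protocol id (0), remaining length, unit id
       pdu = struct.pack(">BHH", 0x03, start_addr, count)
       mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
       with socket.create_connection((host, port), timeout=2.0) as sock:
           sock.sendall(mbap + pdu)
           response = sock.recv(256)
       # Response: MBAP (7 bytes), function code, byte count, register values
       byte_count = response[8]
       return struct.unpack(">" + "H" * (byte_count // 2),
                            response[9:9 + byte_count])

   # e.g., poll two registers from a (hypothetical) nacelle controller:
   # values = read_holding_registers("10.0.0.20", unit_id=1, start_addr=0, count=2)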

3.1.2.2.2. Inter-domain Network Considerations
A remote control center belonging to a grid operator regulates the power output, enables remote actuation, and monitors the health of one or more wind parks in tandem.  It connects to the local control center in a wind park over the Internet (Figure 2) via firewalls at both ends.  The Autonomous System (AS) path between the local control center in the wind park and the remote control center typically involves several ISPs at different tiers.  For example, a remote control center in Denmark can regulate a wind park in Greece over the normal public AS path between the two locations.

   +--------------+
   |              |
   |              |
   | Wind Park #1 +----+
   |              |    |      XXXXXX
   |              |    |      X    XXXXXXXX           +----------------+
   +--------------+    |   XXXX           XXXXX       |                |
                       +---+                  XXX     | Remote Control |
                           XXX    Internet       +----+     Center     |
                       +----+X                XXX     |                |
   +--------------+    |    XXXXXXX             XX    |                |
   |              |    |          XX     XXXXXXX      +----------------+
   |              |    |            XXXXX
   | Wind Park #2 +----+
   |              |
   |              |
   +--------------+

              Figure 2: Wind Turbine Control via Internet

The remote control center is part of the SCADA system, setting the desired power output to the wind park and reading back the result once the new power output level has been set.  Traffic between the remote control center and the wind park typically consists of protocols like IEC 60870-5-104 [IEC-60870-5-104], OPC XML-Data Access
   (XML-DA) [OPCXML], Modbus [MODBUS], and SNMP [RFC3411].  At the time
   of this writing, traffic flows between the remote control center and
   the wind park are best effort.  QoS requirements are not strict, so
   no Service Level Agreements (SLAs) or service-provisioning mechanisms
   (e.g., VPNs) are employed.  In the case of such events as equipment
   failure, tolerance for alarm delay is on the order of minutes, due to
   redundant systems already in place.

   Future use cases will require bounded latency, bounded jitter, and
   extraordinarily low packet loss for inter-domain traffic flows due to
   the softwarization and virtualization of core wind-park equipment
   (e.g., switches, firewalls, and SCADA server components).  These
   factors will create opportunities for service providers to install
   new services and dynamically manage them from remote locations.  For
   example, to enable failover of a local SCADA server, a SCADA server
   in another wind-park site (under the administrative control of the
   same operator) could be utilized temporarily (Figure 3).  In that
   case, local traffic would be forwarded to the remote SCADA server,
   and existing intra-domain QoS and timing parameters would have to be
   met for inter-domain traffic flows.

   +--------------+
   |              |
   |              |
   | Wind Park #1 +----+
   |              |    |      XXXXXX
   |              |    |      X    XXXXXXXX           +----------------+
   +--------------+    |   XXXX           XXXXX       |                |
                       +---+      Operator-   XXX     | Remote Control |
                           XXX    Administered   +----+     Center     |
                       +----+X    WAN         XXX     |                |
   +--------------+    |    XXXXXXX             XX    |                |
   |              |    |          XX     XXXXXXX      +----------------+
   |              |    |            XXXXX
   | Wind Park #2 +----+
   |              |
   |              |
   +--------------+

       Figure 3: Wind Turbine Control via Operator-Administered WAN
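
A minimal sketch of the failover decision described above (Figure 3) is given below in Python.  The addresses, port, and liveness probe are illustrative assumptions only; the point is that once traffic is redirected to the remote site, the inter-domain path must meet the intra-domain QoS and timing parameters:

   # Sketch: prefer the local SCADA server; fail over to a server in another
   # wind park of the same operator when the local one is unreachable.
   import socket

   LOCAL_SCADA = ("10.1.0.10", 2404)    # hypothetical local server address
   REMOTE_SCADA = ("10.2.0.10", 2404)   # hypothetical server in another park

   def scada_alive(addr, timeout_s=1.0) -> bool:
       """Crude liveness probe: can a TCP connection be opened?"""
       try:
           with socket.create_connection(addr, timeout=timeout_s):
               return True
       except OSError:
           return False

   def active_scada():
       return LOCAL_SCADA if scada_alive(LOCAL_SCADA) else REMOTE_SCADA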

3.1.3. Distribution Use Case

3.1.3.1. Fault Location, Isolation, and Service Restoration (FLISR)
"Fault Location, Isolation, and Service Restoration (FLISR)" refers to the ability to automatically locate the fault, isolate the fault, and restore service in the distribution network. This will likely be the first widespread application of distributed intelligence in the grid. The static power-switch status (open/closed) in the network dictates the power flow to secondary substations. Reconfiguring the network in the event of a fault is typically done manually on site to energize/de-energize alternate paths. Automating the operation of substation switchgear allows the flow of power to be altered automatically under fault conditions. FLISR can be managed centrally from a Distribution Management System (DMS) or executed locally through distributed control via intelligent switches and fault sensors.
   +---------------------------------+---------------------------------+
   |        FLISR Requirement        |            Attribute            |
   +---------------------------------+---------------------------------+
   |      One-way maximum delay      |              80 ms              |
   |                                 |                                 |
   |    Asymmetric delay required    |                No               |
   |                                 |                                 |
   |          Maximum jitter         |              40 ms              |
   |                                 |                                 |
   |             Topology            |     Point to point, point to    |
   |                                 |    multipoint, multipoint to    |
   |                                 |            multipoint           |
   |                                 |                                 |
   |            Bandwidth            |             64 kbps             |
   |                                 |                                 |
   |           Availability          |             99.9999%            |
   |                                 |                                 |
   |     Precise timing required     |               Yes               |
   |                                 |                                 |
   |  Recovery time on node failure  |    Depends on customer impact   |
   |                                 |                                 |
   |      Performance management     |          Yes; mandatory         |
   |                                 |                                 |
   |            Redundancy           |               Yes               |
   |                                 |                                 |
   |           Packet loss           |               0.1%              |
   +---------------------------------+---------------------------------+

                Table 12: FLISR Communication Requirements



