
RFC 8578

Deterministic Networking Use Cases


6. Cellular Radio

6.1. Use Case Description

This use case describes the application of deterministic networking in the context of cellular telecom transport networks. Important elements include time synchronization, clock distribution, and ways to establish time-sensitive streams for both Layer 2 and Layer 3 user-plane traffic.

6.1.1. Network Architecture

Figure 10 illustrates a 3GPP-defined cellular network architecture typical at the time of this writing. The architecture includes "Fronthaul", "Midhaul", and "Backhaul" network segments. The "Fronthaul" is the network connecting base stations (Baseband Units (BBUs)) to the Remote Radio Heads (RRHs) (also referred to here as "antennas"). The "Midhaul" is the network that interconnects base
   stations (or small-cell sites).  The "Backhaul" is the network or
   links connecting the radio base station sites to the network
   controller/gateway sites (i.e., the core of the 3GPP cellular
   network).

              Y (RRHs (antennas))
               \
           Y__  \.--.                   .--.         +------+
              \_(    `.     +---+     _(    `.       | 3GPP |
       Y------( Front- )----|eNB|----( Back-  )------| core |
             ( `  .haul )   +---+   ( ` .haul) )     | netw |
             /`--(___.-'      \      `--(___.-'      +------+
          Y_/     /            \.--.       \
               Y_/            _(Mid-`.      \
                             (   haul )      \
                            ( `  .  )  )      \
                             `--(___.-'\_____+---+    (small-cell sites)
                                   \         |SCe|__Y
                                  +---+      +---+
                               Y__|eNB|__Y
                                  +---+
                                Y_/   \_Y ("local" radios)

        Figure 10: Generic 3GPP-Based Cellular Network Architecture

   In Figure 10, "eNB" ("E-UTRAN Node B") is the hardware that is
   connected to the mobile phone network and enables the mobile phone
   network to communicate with mobile handsets [TS36300].  ("E-UTRAN"
   stands for "Evolved Universal Terrestrial Radio Access Network".)

6.1.2. Delay Constraints

The available processing time for Fronthaul networking overhead is limited to the available time after the baseband processing of the radio frame has completed. For example, in Long Term Evolution (LTE) radio, 3 ms is allocated for the processing of a radio frame, but typically the baseband processing uses most of it, allowing only a small fraction to be used by the Fronthaul network. In this example, out of 3 ms, the maximum time allocated to the Fronthaul network for one-way delay is 250 us, and the existing specification [NGMN-Fronth] specifies a maximum delay of only 100 us.

This ultimately determines the distance the RRHs can be located from the base stations (e.g., 100 us equals roughly 20 km of optical fiber-based transport). Allocation options regarding the available time budget between processing and transport are currently undergoing heavy discussion in the mobile industry.
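
As a rough illustration of the distance limit implied by such a delay budget, the following sketch computes the maximum one-way fiber length for a given budget. It assumes a propagation speed of roughly 200,000 km/s in optical fiber (about 5 us per km); this constant is an illustrative assumption, not a value taken from [NGMN-Fronth] or [CPRI].

   # Sketch: longest fiber run that fits a one-way delay budget.
   # Assumes ~200,000 km/s propagation speed in fiber (about 5 us/km);
   # this constant is illustrative, not taken from the references.

   FIBER_KM_PER_US = 0.2   # ~200,000 km/s, expressed in km per microsecond

   def max_fiber_length_km(one_way_budget_us):
       """Longest fiber run whose propagation delay fits the budget."""
       return one_way_budget_us * FIBER_KM_PER_US

   print(max_fiber_length_km(100))   # ~20 km, matching the 100 us example
   print(max_fiber_length_km(250))   # ~50 km for a 250 us budget
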
   For packet-based transport, the allocated transport time between the
   RRH and the BBU is consumed by node processing, buffering, and
   distance-incurred delay.  An example of the allocated transport time
   is 100 us (from the Common Public Radio Interface [CPRI]).

   The baseband processing time and the available "delay budget" for the
   Fronthaul is likely to change in the forthcoming "5G" due to reduced
   radio round-trip times and other architectural and service
   requirements [NGMN].

   The transport time budget, as noted above, places limitations on the
   distance that RRHs can be located from base stations (i.e., the link
   length).  In the above analysis, it is assumed that the entire
   transport time budget is available for link propagation delay.
   However, the transport time budget can be broken down into three
   components: scheduling/queuing delay, transmission delay, and link
   propagation delay.  Using today's Fronthaul networking technology,
   the queuing, scheduling, and transmission components might become the
   dominant factors in the total transport time, rather than the link
   propagation delay.  This is especially true in cases where the
   Fronthaul link is relatively short and is shared among multiple
   Fronthaul flows -- for example, in indoor and small-cell networks,
   massive Multiple Input Multiple Output (MIMO) antenna networks, and
   split Fronthaul architectures.
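
   To make the decomposition above concrete, the following sketch
   subtracts per-hop queuing/scheduling and transmission delays from
   the total transport time budget to see what remains for link
   propagation.  The per-hop figures used in the example are purely
   illustrative assumptions, not values from the references.

      # Sketch: split a Fronthaul transport time budget into components.
      # The per-hop queuing and transmission figures are illustrative
      # assumptions; real values depend on equipment and link speed.

      def propagation_budget_us(total_budget_us, hops,
                                queuing_us_per_hop, transmission_us_per_hop):
          """Time left for link propagation after node-related delays."""
          node_delay = hops * (queuing_us_per_hop + transmission_us_per_hop)
          return total_budget_us - node_delay

      # Example: a 100 us budget across 3 hops, with 10 us queuing and
      # 2 us transmission per hop, leaves 64 us for propagation
      # (roughly 13 km of fiber at ~5 us/km).
      print(propagation_budget_us(100, 3, 10, 2))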

   DetNet technology can improve Fronthaul networks by controlling and
   reducing the time required for the queuing, scheduling, and
   transmission operations by properly assigning network resources, thus
   (1) leaving more of the transport time budget available for link
   propagation and (2) enabling longer link lengths.  However, link
   length is usually a predetermined parameter and is not a controllable
   network parameter, since RRH and BBU sites are usually located in
   predetermined locations.  However, the number of antennas in an RRH
   site might increase -- for example, by adding more antennas,
   increasing the MIMO capability of the network, or adding support for
   massive MIMO.  This means increasing the number of Fronthaul flows
   sharing the same Fronthaul link.  DetNet can now control the
   bandwidth assignment of the Fronthaul link and the scheduling of
   Fronthaul packets over this link and can provide adequate buffer
   provisioning for each flow to reduce the packet loss rate.

   Another way in which DetNet technology can aid Fronthaul networks is
   by providing effective isolation between flows -- for example,
   between flows originating in different slices within a network-sliced
   5G network.  Note, however, that this isolation applies to DetNet
   flows for which resources have been preallocated, i.e., it does not
   apply to best-effort flows within a DetNet.  DetNet technology can
   also dynamically control the bandwidth-assignment, scheduling, and
   packet-forwarding decisions, as well as the buffer provisioning of
   the Fronthaul flows to guarantee the end-to-end delay of the
   Fronthaul packets and minimize the packet loss rate.

   [METIS] documents the fundamental challenges as well as overall
   technical goals of the future 5G mobile and wireless systems as the
   starting point.  These future systems should support much higher data
   volumes and rates and significantly lower end-to-end latency for 100x
   more connected devices (at cost and energy-consumption levels similar
   to today's systems).

   For Midhaul connections, delay constraints are driven by inter-site
   radio functions such as Coordinated Multi-Point (CoMP) processing
   (see [CoMP]).  CoMP reception and transmission constitute a framework
   in which multiple geographically distributed antenna nodes cooperate
   to improve performance for the users served in the common cooperation
   area.  The design principle of CoMP is to extend single-cell-to-
   multi-UE (User Equipment) transmission to a multi-cell-to-multi-UE
   transmission via cooperation among base stations.

   CoMP has delay-sensitive performance parameters: "Midhaul latency"
   and "CSI (Channel State Information) reporting and accuracy".  The
   essential feature of CoMP is signaling between eNBs, so Midhaul
   latency is the dominating limitation of CoMP performance.  Generally,
   CoMP can benefit from coordinated scheduling (either distributed or
   centralized) of different cells if the signaling delay between eNBs
   is within 1-10 ms.  This delay requirement is both rigid and
   absolute, because any uncertainty in delay will degrade performance
   significantly.

   Inter-site CoMP is one of the key requirements for 5G and is also a
   goal for 4.5G network architectures.

6.1.3. Time-Synchronization Constraints

   Fronthaul time-synchronization requirements are given by [TS25104],
   [TS36104], [TS36211], and [TS36133].  These can be summarized for
   the 3GPP LTE-based networks as follows:

   Delay accuracy:
      +-8 ns (i.e., +-1/32 Tc, where Tc is the Universal Mobile
      Telecommunications System (UMTS) Chip time of 1/3.84 MHz),
      resulting in a round-trip accuracy of +-16 ns.  The value is this
      low in order to meet the 3GPP Timing Alignment Error (TAE)
      measurement requirements.  Note that performance guarantees of
      low-nanosecond values such as these are considered to be below
      the DetNet layer -- it is assumed that the underlying
      implementation (e.g., the hardware) will provide sufficient
      support (e.g., buffering) to enable this level of accuracy.
      These values are maintained in the use case to give an indication
      of the overall application.  (A short derivation of these Tc
      fractions is given after this list.)

   TAE:
      TAE is problematic for Fronthaul networks and must be minimized.
      If the transport network cannot guarantee TAE levels that are low
      enough, then additional buffering has to be introduced at the
      edges of the network to buffer out the jitter.  Buffering is not
      desirable, as it reduces the total available delay budget.

      Packet Delay Variation (PDV) requirements can be derived from TAE
      measurements for packet-based Fronthaul networks.

      *  For MIMO or TX diversity transmissions, at each carrier
         frequency, TAE measurements shall not exceed 65 ns (i.e.,
         1/4 Tc).

      *  For intra-band contiguous carrier aggregation, with or without
         MIMO or TX diversity, TAE measurements shall not exceed 130 ns
         (i.e., 1/2 Tc).

      *  For intra-band non-contiguous carrier aggregation, with or
         without MIMO or TX diversity, TAE measurements shall not exceed
         260 ns (i.e., 1 Tc).

      *  For inter-band carrier aggregation, with or without MIMO or TX
         diversity, TAE measurements shall not exceed 260 ns.

   Transport link contribution to radio frequency errors:
      +-2 PPB.  This value is considered to be "available" for the
      Fronthaul link out of the total 50 PPB budget reserved for the
      radio interface.  Note that the transport link contributes to
      radio frequency errors for the following reason: at the time of
      this writing, Fronthaul communication is direct communication from
      the radio unit to the RRH.  The RRH is essentially a passive
      device (e.g., without buffering).  The transport drives the
      antenna directly by feeding it with samples, and everything the
      transport adds will be introduced to the radio "as is".  So, if
      the transport causes any additional frequency errors, the errors
      will show up immediately on the radio as well.  Note that
      performance guarantees of low-nanosecond values such as these are
      considered to be below the DetNet layer -- it is assumed that the
      underlying implementation (e.g., the hardware) will provide
      sufficient support to enable this level of performance.  These
      values are maintained in the use case to give an indication of the
      overall application.
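
   The TAE limits listed above are simple fractions of the UMTS Chip
   time Tc.  The short sketch below reproduces the values given in this
   section; it involves no assumptions beyond the 3.84 MHz chip rate
   stated earlier.

      # Sketch: derive the TAE limits from the UMTS Chip time
      # Tc = 1/3.84 MHz.

      TC_NS = 1e9 / 3.84e6    # ~260.4 ns

      print(TC_NS / 32)       # ~8.1 ns   -> "+-8 ns" delay accuracy
      print(TC_NS / 4)        # ~65.1 ns  -> 65 ns (MIMO / TX diversity)
      print(TC_NS / 2)        # ~130.2 ns -> 130 ns (intra-band contiguous CA)
      print(TC_NS)            # ~260.4 ns -> 260 ns (non-contiguous and
                              #              inter-band CA)
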
   The above-listed time-synchronization requirements are difficult to
   meet with point-to-point connected networks and are more difficult to
   meet when the network includes multiple hops.  It is expected that
   networks must include buffering at the ends of the connections as
   imposed by the jitter requirements, since trying to meet the jitter
   requirements in every intermediate node is likely to be too costly.
   However, every measure to reduce jitter and delay on the path makes
   it easier to meet the end-to-end requirements.

   In order to meet the timing requirements, both senders and receivers
   must remain time synchronized, demanding very accurate clock
   distribution -- for example, support for IEEE 1588 transparent clocks
   or boundary clocks in every intermediate node.

   In cellular networks from the LTE radio era onward, phase
   synchronization is needed in addition to frequency synchronization
   [TS36300] [TS23401].  Time constraints are also important due to
   their impact on packet loss.  If a packet is delivered too late, then
   the packet may be dropped by the host.

6.1.4. Transport-Loss Constraints

Fronthaul and Midhaul networks assume that transport is almost error free. Errors can cause a reset of the radio interfaces, in turn causing reduced throughput or broken radio connectivity for mobile customers.

For packetized Fronthaul and Midhaul connections, packet loss may be caused by BER, congestion, or network failure scenarios. Different Fronthaul "functional splits" are being considered by 3GPP, requiring strict Frame Loss Ratio (FLR) guarantees. As one example (referring to the legacy CPRI split, which is option 8 in 3GPP), lower-layer splits may imply an FLR of less than 10^-7 for data traffic and less than 10^-6 for control and management traffic.

Many of the tools available for eliminating packet loss for Fronthaul and Midhaul networks have serious challenges; for example, retransmitting lost packets or using FEC to circumvent bit errors (or both) is practically impossible, due to the additional delay incurred. Using redundant streams for better guarantees of delivery is also practically impossible in many cases, due to high bandwidth requirements for Fronthaul and Midhaul networks. Protection switching is also a candidate, but at the time of this writing, available technologies for the path switch are too slow to avoid a reset of mobile interfaces.
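
To illustrate what such FLR targets mean in practice, the sketch below converts an FLR bound into a tolerable number of lost frames per second for an assumed frame rate; the one-million-frames-per-second figure is an assumption chosen only for illustration.

   # Sketch: frames that may be lost per second under a given Frame
   # Loss Ratio (FLR) target.  The frame rate is an assumed example.

   def max_lost_frames_per_s(frame_rate_per_s, flr):
       return frame_rate_per_s * flr

   print(max_lost_frames_per_s(1_000_000, 1e-7))  # 0.1 frame/s (data traffic)
   print(max_lost_frames_per_s(1_000_000, 1e-6))  # 1 frame/s (control/management)
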
   It is assumed that Fronthaul links are symmetric.  All Fronthaul
   streams (i.e., those carrying radio data) have equal priority and
   cannot delay or preempt each other.

   All of this implies that it is up to the network to guarantee that
   each time-sensitive flow meets its schedule.

6.1.5. Cellular Radio Network Security Considerations

Establishing time-sensitive streams in the network entails reserving networking resources for long periods of time. It is important that these reservation requests be authenticated to prevent malicious reservation attempts from hostile nodes (or accidental misconfiguration). This is particularly important in the case where the reservation requests span administrative domains. Furthermore, the reservation information itself should be digitally signed to reduce the risk of a legitimate node pushing a stale or hostile configuration into another networking node.

Note: This is considered important for the security policy of the network but does not affect the core DetNet architecture and design.
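
A minimal sketch of protecting reservation information is shown below. It uses an HMAC from the Python standard library purely as a stand-in for a real digital signature, and the request fields and shared key are invented for illustration; an actual deployment would use asymmetric signatures and proper key management.

   # Sketch: integrity-protect a stream-reservation request.
   # HMAC-SHA256 stands in for a true digital signature here; the
   # request fields and the shared key are illustrative assumptions.
   import hashlib
   import hmac
   import json

   SHARED_KEY = b"example-provisioning-key"   # placeholder key

   def sign_reservation(request):
       payload = json.dumps(request, sort_keys=True).encode()
       return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

   def verify_reservation(request, tag):
       return hmac.compare_digest(sign_reservation(request), tag)

   req = {"flow_id": "fronthaul-17", "max_latency_us": 100}
   tag = sign_reservation(req)
   print(verify_reservation(req, tag))   # True unless the request was altered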

6.2. Cellular Radio Networks Today

6.2.1. Fronthaul

Today's Fronthaul networks typically consist of:

o  Dedicated point-to-point fiber connection (common)

o  Proprietary protocols and framings

o  Custom equipment and no real networking

At the time of this writing, solutions for Fronthaul are direct optical cables or Wavelength-Division Multiplexing (WDM) connections.

6.2.2. Midhaul and Backhaul

   Today's Midhaul and Backhaul networks typically consist of:

   o  Mostly normal IP networks, MPLS-TP, etc.

   o  Clock distribution and synchronization using IEEE 1588 and syncE

   Telecommunications networks in the Midhaul and Backhaul are already
   heading towards transport networks where precise time-
   synchronization support is one of the basic building blocks.  In
   order to meet bandwidth and cost requirements, most transport
   networks have already transitioned to all-IP packet-based networks;
   however, highly accurate clock distribution has become a challenge.

   In the past, Midhaul and Backhaul connections were typically based on
   TDM and provided frequency-synchronization capabilities as a part of
   the transport media.  More recently, other technologies such as GPS
   or syncE [syncE] have been used.

   Ethernet, IP/MPLS [RFC3031], and pseudowires (as described in
   [RFC3985] ("Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture")
   for legacy transport support) have become popular tools for building
   and managing new all-IP Radio Access Networks (RANs)
   [SR-IP-RAN-Use-Case].  Although various timing and synchronization
   optimizations have already been proposed and implemented, including
   PTP enhancements [IEEE-1588] (see also [Timing-over-MPLS] and
   [RFC8169]), these solutions are not necessarily sufficient for the
   forthcoming RAN architectures, nor do they guarantee the more
   stringent time-synchronization requirements such as [CPRI].

   Existing solutions for TDM over IP include those discussed in
   [RFC4553], [RFC5086], and [RFC5087]; [MEF8] addresses TDM over
   Ethernet transports.

6.3. Cellular Radio Networks in the Future

   Future cellular radio networks will be based on a mix of different
   xHaul networks (xHaul = Fronthaul, Midhaul, and Backhaul), and
   future transport networks should be able to support all of them
   simultaneously.  It is already envisioned today that:

   o  Not all "cellular radio network" traffic will be IP; for example,
      some will remain at Layer 2 (e.g., Ethernet based).  DetNet
      solutions must address all traffic types (Layer 2 and Layer 3)
      with the same tools and allow their transport simultaneously.

   o  All types of xHaul networks will need some types of DetNet
      solutions.  For example, with the advent of 5G, some Backhaul
      traffic will also have DetNet requirements (for example, traffic
      belonging to time-critical 5G applications).

   o  Different functional splits between the base stations and the
      on-site units could coexist on the same Fronthaul and Backhaul
      network.

   Future cellular radio networks should contain the following:

   o  Unified standards-based transport protocols and standard
      networking equipment that can make use of underlying deterministic
      link-layer services

   o  Unified and standards-based network management systems and
      protocols in all parts of the network (including Fronthaul)

   New RAN deployment models and architectures may require TSN services
   with strict requirements on other parts of the network that
   previously were not considered to be packetized at all.  Time and
   synchronization support are already topical for Backhaul and Midhaul
   packet networks [MEF22.1.1] and are also becoming a real issue for
   Fronthaul networks.  Specifically, in Fronthaul networks, the timing
   and synchronization requirements can be extreme for packet-based
   technologies -- for example, on the order of a PDV of +-20 ns or less
   and frequency accuracy of +-0.002 PPM [Fronthaul].

   The actual transport protocols and/or solutions for establishing
   required transport "circuits" (pinned-down paths) for Fronthaul
   traffic are still undefined.  Those protocols are likely to include
   (but are not limited to) solutions directly over Ethernet, over IP,
   and using MPLS/pseudowire transport.

   Interesting and important work for TSN has been done for Ethernet
   [IEEE-8021TSNTG]; this work specifies the use of PTP [IEEE-1588] in
   the context of IEEE 802.1D and IEEE 802.1Q.  [IEEE-8021AS] specifies
   a Layer 2 time-synchronizing service, and other specifications such
   as IEEE 1722 [IEEE-1722] specify Ethernet-based Layer 2 transport for
   time-sensitive streams.

   However, even these Ethernet TSN features may not be sufficient for
   Fronthaul traffic.  Therefore, having specific profiles that take
   Fronthaul requirements into account is desirable [IEEE-8021CM].

   New promising work seeks to enable the transport of time-sensitive
   Fronthaul streams in Ethernet bridged networks [IEEE-8021CM].
   Analogous to IEEE 1722, standardization efforts in the IEEE 1914.3
   Task Force [IEEE-19143] to define the Layer 2 transport encapsulation
   format for transporting Radio over Ethernet (RoE) are ongoing.

   As mentioned in Section 6.1.2, 5G communications will provide one of
   the most challenging cases for delay-sensitive networking.  In order
   to meet the challenges of ultra-low latency and ultra-high
   throughput, 3GPP has studied various functional splits for 5G, i.e.,
   physical decomposition of the 5G "gNodeB" base station and deployment
   of its functional blocks in different locations [TR38801].
   These splits are numbered from split option 1 (dual connectivity, a
   split in which the radio resource control is centralized and other
   radio stack layers are in distributed units) to split option 8 (a
   PHY-RF split in which RF functionality is in a distributed unit and
   the rest of the radio stack is in the centralized unit), with each
   intermediate split having its own data-rate and delay requirements.
   Packetized versions of different splits have been proposed, including
   enhanced CPRI (eCPRI) [eCPRI] and RoE (as previously noted).  Both
   provide Ethernet encapsulations, and eCPRI is also capable of IP
   encapsulation.

   All-IP RANs and xHaul networks would benefit from time
   synchronization and time-sensitive transport services.  Although
   Ethernet appears to be the unifying technology for the transport,
   there is still a disconnect when it comes to providing Layer 3
   services.  The protocol stack typically has a number of layers below
   Ethernet Layer 2 that might be "visible" to Layer 3.  In a fairly
   common scenario, on top of the lowest-layer (optical) transport is
   the first (lowest) Ethernet layer, then one or more layers of MPLS,
   pseudowires, and/or other tunneling protocols, and finally one or
   more Ethernet layers that are visible to Layer 3.

   Although there exist technologies for establishing circuits through
   the routed and switched networks (especially in the MPLS/PWE space),
   there is still no way to signal the time-synchronization and
   time-sensitive stream requirements/reservations for Layer 3 flows in
   a way that addresses the entire transport stack, including the
   Ethernet layers that need to be configured.

   Furthermore, not all "user-plane" traffic will be IP.  Therefore, the
   solution in question also must address the use cases where the
   user-plane traffic is on a different layer (for example, Ethernet
   frames).

6.4. Cellular Radio Networks Requests to the IETF

A standard for data-plane transport specifications that is:

o  Unified among all xHauls (meaning that different flows with diverse DetNet requirements can coexist in the same network and traverse the same nodes without interfering with each other)

o  Deployed in a highly deterministic network environment

o  Capable of supporting multiple functional splits simultaneously, including existing Backhaul and CPRI Fronthaul, and (potentially) new modes as defined, for example, in 3GPP; these goals can be supported by the existing DetNet use case "common themes" (Section 11); of special note are Sections 11.1.8 ("Mix of Deterministic and Best-Effort Traffic"), 11.3.1 ("Bounded Latency"), 11.3.2 ("Low Latency"), 11.3.4 ("Symmetrical Path Delays"), and 11.6 ("Deterministic Flows")

o  Capable of supporting network slicing and multi-tenancy; these goals can be supported by the same DetNet themes noted above

o  Capable of transporting both in-band and out-of-band control traffic (e.g., Operations, Administration, and Maintenance (OAM) information)

o  Deployable over multiple data-link technologies (e.g., IEEE 802.3, mmWave)

A standard for data-flow information models that is:

o  Aware of the time sensitivity and constraints of the target networking environment

o  Aware of underlying deterministic networking services (e.g., on the Ethernet layer)

7. Industrial Machine to Machine (M2M)

7.1. Use Case Description

"Industrial automation" in general refers to automation of manufacturing, quality control, and material processing. This M2M use case focuses on machine units on a plant floor that periodically exchange data with upstream or downstream machine modules and/or a supervisory controller within a LAN.
   PLCs are the "actors" in M2M communications.  Communication between
   PLCs, and between PLCs and the supervisory PLC (S-PLC), is achieved
   via critical control/data streams (Figure 11).

              S (Sensor)
               \                                  +-----+
         PLC__  \.--.                   .--.   ---| MES |
              \_(    `.               _(    `./   +-----+
       A------( Local  )-------------(  L2    )
             (      Net )           (      Net )    +-------+
             /`--(___.-'             `--(___.-' ----| S-PLC |
          S_/     /       PLC   .--. /              +-------+
               A_/           \_(    `.
            (Actuator)       (  Local )
                            (       Net )
                             /`--(___.-'\
                            /       \    A
                           S         A

      Figure 11: Current Generic Industrial M2M Network Architecture

   This use case focuses on PLC-related communications; communication to
   Manufacturing Execution Systems (MESs) is not addressed.

   This use case covers only critical control/data streams; non-critical
   traffic between industrial automation applications (such as
   communication of state, configuration, setup, and database
   communication) is adequately served by prioritizing techniques
   available at the time of this writing.  Such traffic can use up to
   80% of the total bandwidth required.  There is also a subset of
   non-time-critical traffic that must be reliable even though it is not
   time sensitive.

   In this use case, deterministic networking is primarily needed to
   provide end-to-end delivery of M2M messages within specific timing
   constraints -- for example, in closed-loop automation control.
   Today, this level of determinism is provided by proprietary
   networking technologies.  In addition, standard networking
   technologies are used to connect the local network to remote
   industrial automation sites, e.g., over an enterprise or metro
   network that also carries other types of traffic.  Therefore, flows
   that should be forwarded with deterministic guarantees need to be
   sustained, regardless of the amount of other flows in those networks.

7.2. Industrial M2M Communications Today

Today, proprietary networks fulfill the needed timing and availability for M2M networks.

The network topologies used today by industrial automation are similar to those used by telecom networks: daisy chain, ring, hub-and-spoke, and "comb" (a subset of daisy chain).

PLC-related control/data streams are transmitted periodically and carry either a preconfigured payload or a payload configured during runtime.

Some industrial applications require time synchronization at the end nodes. For such time-coordinated PLCs, accuracy of 1 us is required. Even in the case of "non-time-coordinated" PLCs, time synchronization may be needed, e.g., for timestamping of sensor data.

Industrial-network scenarios require advanced security solutions. At the time of this writing, many industrial production networks are physically separated. Filtering policies that are typically enforced in firewalls are used to prevent critical flows from being leaked outside a domain.

7.2.1. Transport Parameters

The cycle time defines the frequency of message(s) between industrial actors. The cycle time is application dependent, in the range of 1-100 ms for critical control/data streams.

Industrial applications assume that deterministic transport will be used for critical control/data streams (instead of defining latency and delay-variation parameters), so it is sufficient to specify an upper bound on latency (maximum latency). The underlying networking infrastructure must ensure a maximum end-to-end message delivery time in the range of 100 us to 50 ms, depending on the control-loop application.

The bandwidth requirements of control/data streams are usually calculated directly from the bytes-per-cycle parameter of the control loop. For PLC-to-PLC communication, one can expect 2-32 streams with packet sizes in the range of 100-700 bytes. For S-PLC-to-PLC communication, the number of streams is higher -- up to 256 streams. Usually, no more than 20% of available bandwidth is used for critical control/data streams. In today's networks, 1 Gbps links are commonly used.
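
For illustration, the sketch below derives stream bandwidth directly from the bytes-per-cycle and cycle-time parameters. The particular combination of values is an assumption chosen from the ranges given above.

   # Sketch: bandwidth of critical control/data streams, derived from
   # the bytes-per-cycle and cycle-time parameters of the control loop.

   def stream_bandwidth_bps(bytes_per_cycle, cycle_time_s):
       return bytes_per_cycle * 8 / cycle_time_s

   # Example: 32 PLC-to-PLC streams, 700 bytes per cycle, 1 ms cycle.
   per_stream = stream_bandwidth_bps(700, 0.001)   # 5.6 Mbit/s per stream
   total = 32 * per_stream                         # ~179 Mbit/s in total,
                                                   # under 20% of a 1 Gbps link
   print(per_stream, total)
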
   Most PLC control loops are rather tolerant of packet loss; however,
   critical control/data streams accept a loss of no more than one
   packet per consecutive communication cycle (i.e., if a packet gets
   lost in cycle "n", then the next cycle ("n+1") must be lossless).
   After the loss of two or more consecutive packets, the network may be
   considered to be "down" by the application.
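
   The loss-tolerance rule above amounts to a small piece of monitoring
   logic; the sketch below illustrates the rule (it is not taken from
   any standard) by declaring the network "down" once two consecutive
   communication cycles each lose a packet.

      # Sketch: track per-cycle packet loss for a critical control/data
      # stream.  Two or more consecutive lossy cycles mean the
      # application may consider the network "down".

      class LossMonitor:
          def __init__(self):
              self.consecutive_lossy_cycles = 0

          def end_of_cycle(self, packet_lost):
              """Return True while the stream is still considered usable."""
              if packet_lost:
                  self.consecutive_lossy_cycles += 1
              else:
                  self.consecutive_lossy_cycles = 0
              return self.consecutive_lossy_cycles < 2

      mon = LossMonitor()
      print(mon.end_of_cycle(True))   # True: a single lossy cycle is tolerated
      print(mon.end_of_cycle(True))   # False: two consecutive losses -> "down"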

   As network downtime may impact the whole production system, the
   required network availability is rather high (99.999%).

   Based on the above parameters, some form of redundancy will be
   required for M2M communications; however, any individual solution
   depends on several parameters, including cycle time and
   delivery time.

7.2.2. Stream Creation and Destruction

In an industrial environment, critical control/data streams are created rather infrequently, on the order of ~10 times per day/week/month. Most of these critical control/data streams get created at machine startup; however, flexibility is also needed during runtime -- for example, when adding or removing a machine. As production systems become more flexible going forward, there will be a significant increase in the rate at which streams are created, changed, and destroyed.

7.3. Industrial M2M in the Future

We foresee a converged IP-standards-based network with deterministic properties that can satisfy the timing, security, and reliability constraints described above. Today's proprietary networks could then be interfaced to such a network via gateways; alternatively, in the case of new installations, devices could be connected directly to the converged network. For this use case, time-synchronization accuracy on the order of 1 us is expected.

7.4. Industrial M2M Requests to the IETF

   o  Converged IP-based network

   o  Deterministic behavior (bounded latency and jitter)

   o  High availability (99.999%), presumably achieved through
      redundancy

   o  Low message delivery time (100 us to 50 ms)

   o  Low packet loss (with a bounded number of consecutive lost
      packets)

   o  Security (e.g., preventing critical flows from being leaked
      between physically separated networks)

8. Mining Industry

8.1. Use Case Description

The mining industry is highly dependent on networks to monitor and control their systems, in both open-pit and underground extraction as well as in transport and refining processes. In order to reduce risks and increase operational efficiency in mining operations, operators have been relocated (as much as possible) from the extraction site to remote control and monitoring sites.

In the case of open-pit mining, autonomous trucks are used to transport the raw materials from the open pit to the refining factory where the final product (e.g., copper) is obtained. Although the operation is autonomous, the trucks are remotely monitored from a central facility.

In pit mines, the monitoring of the tailings or mine dumps is critical in order to minimize environmental pollution. In the past, monitoring was conducted through manual inspection of preinstalled dataloggers. Cabling is not typically used in such scenarios, due to its high cost and complex deployment requirements. At the time of this writing, wireless technologies are being employed to monitor these cases permanently. Slopes are also monitored in order to anticipate possible mine collapse. Due to the unstable terrain, cable maintenance is costly and complex; hence, wireless technologies are employed.

In the case of underground monitoring, autonomous vehicles with extraction tools travel independently through the tunnels, but their operational tasks (such as excavation, stone-breaking, and transport) are controlled remotely from a central facility. This generates upstream video and feedback traffic plus downstream actuator-control traffic.

8.2. Mining Industry Today

At the time of this writing, the mining industry uses a packet-switched architecture supported by high-speed Ethernet. However, in order to comply with requirements regarding delay and packet loss, the network bandwidth is overestimated. This results in very low efficiency in terms of resource usage.
   QoS is implemented at the routers to separate video, management,
   monitoring, and process-control traffic for each stream.

   Since mobility is involved in this process, the connections between
   the backbone and the mobile devices (e.g., trucks, trains, and
   excavators) are implemented using a wireless link.  These links are
   based on IEEE 802.11 [IEEE-80211] for open-pit mining and "leaky
   feeder" communications for underground mining.  (A "leaky feeder"
   communication system consists of a coaxial cable, run along tunnels,
   that emits and receives radio waves, functioning as an extended
   antenna.  The cable is "leaky" in that it has gaps or slots in its
   outer conductor to allow the radio signal to leak into or out of the
   cable along its entire length.)

   Lately, in pit mines the use of Low-Power WAN (LPWAN) technologies
   has been extended: tailings, slopes, and mine dumps are monitored by
   battery-powered dataloggers that make use of robust long-range radio
   technologies.  Reliability is usually ensured through retransmissions
   at Layer 2.  Gateways or concentrators act as bridges, forwarding the
   data to the backbone Ethernet network.  Deterministic requirements
   are biased towards reliability rather than latency, as events are
   triggered slowly or can be anticipated in advance.

   At the mineral-processing stage, conveyor belts and refining
   processes are controlled by a SCADA system that provides an
   in-factory delay-constrained networking environment.

   At the time of this writing, voice communications are served by a
   redundant trunking infrastructure, independent from data networks.

8.3. Mining Industry in the Future

Mining operations and management are converging towards a combination of autonomous operation and teleoperation of transport and extraction machines. This means that video, audio, monitoring, and process-control traffic will increase dramatically. Ideally, all activities at the mine will rely on network infrastructure.

Wireless for open-pit mining is already a reality with LPWAN technologies; it is expected to evolve to more-advanced LPWAN technologies, such as those based on LTE (to increase last-hop reliability), or to novel LPWAN flavors with deterministic access.

One area in which DetNet can improve this use case is in the wired networks that make up the "backbone network" of the system. These networks connect many wireless Access Points (APs) together. The mobile machines (which are connected to the network via wireless)
   transition from one AP to the next as they move about.  A
   deterministic, reliable, low-latency backbone can enable these
   transitions to be more reliable.

   Connections that extend all the way from the base stations to the
   machinery via a mix of wired and wireless hops would also be
   beneficial -- for example, to improve the responsiveness of digging
   machines to remote control.  However, to guarantee deterministic
   performance of a DetNet, the end-to-end underlying network must be
   deterministic.  Thus, for this use case, if a deterministic wireless
   transport is integrated with a wire-based DetNet network, it could
   create the desired wired plus wireless end-to-end deterministic
   network.

8.4. Mining Industry Requests to the IETF

o  Improved bandwidth efficiency

o  Very low delay, to enable machine teleoperation

o  Dedicated bandwidth usage for high-resolution video streams

o  Predictable delay, to enable real-time monitoring

o  Potential for constructing a unified DetNet network over a combination of wired and deterministic wireless links

9. Private Blockchain

9.1. Use Case Description

Blockchain was created with Bitcoin as a "public" blockchain on the open Internet; however, blockchain has also spread far beyond its original host into various industries, such as smart manufacturing, logistics, security, legal rights, and others. In these industries, blockchain runs in designated and carefully managed networks in which deterministic networking requirements could be addressed by DetNet. Such implementations are referred to as "private" blockchain.

The sole distinction between public and private blockchain is defined by who is allowed to participate in the network, execute the consensus protocol, and maintain the shared ledger.

Today's networks manage the traffic from blockchain on a best-effort basis, but blockchain operation could be made much more efficient if deterministic networking services were available to minimize latency and packet loss in the network.

9.1.1. Blockchain Operation

A "block" runs as a container of a batch of primary items (e.g., transactions, property records). The blocks are chained in such a way that the hash of the previous block works as the pointer to the header of the new block. Confirmation of each block requires a consensus mechanism. When an item arrives at a blockchain node, the latter broadcasts this item to the rest of the nodes, which receive it, verify it, and put it in the ongoing block. The block confirmation process begins as the number of items reaches the predefined block capacity, at which time the node broadcasts its proved block to the rest of the nodes, to be verified and chained. The result is that block N+1 of each chain transitively vouches for blocks N and previous of that chain.

9.1.2. Blockchain Network Architecture

Blockchain node communication and coordination are achieved mainly through frequent point-to-multipoint communication; however, persistent point-to-point connections are used to transport both the items and the blocks to the other nodes.

For example, consider the following implementation. When a node is initiated, it first requests the other nodes' addresses from a specific entity, such as DNS. The node then creates persistent connections with each of the other nodes. If a node confirms an item, it sends the item to the other nodes via these persistent connections.

As a new block in a node is completed and is proven by the surrounding nodes, it propagates towards its neighbor nodes. When node A receives a block, it verifies it and then sends an invite message to its neighbor B. Neighbor B checks to see if the designated block is available and responds to A if it is unavailable; A then sends the complete block to B. B repeats the process (as was done by A) to start the next round of block propagation.

The challenge of blockchain network operation is not overall data rates, since the volume from both the block and the item stays between hundreds of bytes and a couple of megabytes per second; rather, the challenge is in transporting the blocks with minimum latency to maximize the efficiency of the blockchain consensus process. The efficiency of differing implementations of the consensus process may be affected to a differing degree by the latency (and variation of latency) of the network.
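
The invite-based propagation exchange described above can be summarized in a short sketch. The message names and the in-memory "network" below are invented for illustration; a real node would use the persistent connections described earlier and would verify each block before storing it.

   # Sketch: block propagation via "invite" messages, as described
   # above.  Nodes, messages, and data structures are illustrative.

   class Node:
       def __init__(self, name):
           self.name = name
           self.blocks = set()        # ids of blocks this node already holds
           self.neighbors = []

       def invite(self, block_id):
           """Answer an invite: True means "send me that block"."""
           return block_id not in self.blocks

       def receive_block(self, block_id, block):
           if block_id in self.blocks:
               return                  # already known; stop propagating
           self.blocks.add(block_id)   # verify-and-store (verification omitted)
           for peer in self.neighbors:
               if peer.invite(block_id):
                   peer.receive_block(block_id, block)

   a, b, c = Node("A"), Node("B"), Node("C")
   a.neighbors, b.neighbors = [b], [c]
   a.receive_block("blk-001", {"items": ["tx-1"]})
   print("blk-001" in c.blocks)       # True: the block reached C via B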

9.1.3. Blockchain Security Considerations

Security is crucial to blockchain applications; at the time of this writing, blockchain systems address security issues mainly at the application level, where cryptography as well as hash-based consensus play a leading role in preventing both double-spending and malicious service attacks.

However, there is concern that, in the private blockchain network proposed in this use case, which depends on deterministic properties, the network could be vulnerable to delays and other attacks that specifically target determinism, since such delays and attacks could interrupt service.

9.2. Private Blockchain Today

Today, private blockchain runs in Layer 2 or Layer 3 VPNs, generally without guaranteed determinism. The industry players are starting to realize that improving determinism in their blockchain networks could improve the performance of their service, but at present these goals are not being met.

9.3. Private Blockchain in the Future

Blockchain system performance can be greatly improved through deterministic networking services, primarily because low latency would accelerate the consensus process. It would be valuable to be able to design a private blockchain network with the following properties:

o  Transport of point-to-multipoint traffic in a coordinated network architecture rather than at the application layer (which typically uses point-to-point connections)

o  Guaranteed transport latency

o  Reduced packet loss (to the point where delay incurred by packet retransmissions would be negligible)

9.4. Private Blockchain Requests to the IETF

o  Layer 2 and Layer 3 multicast of blockchain traffic

o  Item and block delivery with bounded, low latency and negligible packet loss

o  Coexistence of blockchain and IT traffic in a single network

o  Ability to scale the network by distributing the centralized control of the network across multiple control entities

