
TS 22.261 (Word version 17.2.0)

D  Critical-communication use cases
D.1  Discrete automation - motion control
Industrial factory automation requires communications for closed-loop control applications. Examples of such applications are motion control of robots, machine tools, and packaging and printing machines. All other discrete-automation applications are addressed in Annex D.2.
The corresponding industrial communication solutions are referred to as fieldbuses. The pertinent standard suite is IEC 61158. Note that clock synchronization is an integral part of fieldbuses used for motion control.
In motion control applications, a controller interacts with a large number of sensors and actuators (e.g. up to 100), which are integrated in a manufacturing unit. The resulting sensor/actuator density is often very high (up to 1 m⁻³). Many such manufacturing units may have to be supported in close proximity within a factory (e.g. up to 100 in automobile assembly line production).
In a closed-loop control application, the controller periodically submits instructions to a set of sensor/actuator devices, which return a response within a cycle time. The messages, referred to as telegrams, are typically small (≤ 56 bytes). The cycle time can be as low as 2 ms, setting stringent end-to-end latency constraints on telegram forwarding (1 ms). Additional constraints on isochronous telegram delivery add tight constraints on jitter (1 μs), and the communication service also has to be highly available (99,9999%).
Multi-robot cooperation is a case in closed-loop control where a group of robots collaborate to conduct an action, for example, symmetrical welding of a car body to minimize deformation. This requires isochronous operation between all robots. For multi-robot cooperation, the jitter requirement (1 μs) applies among the command messages of a control event sent to the group of robots.
To meet the stringent requirements of closed-loop factory automation, the following measures may have to be taken:
  • Limitation to short-range communications.
  • Use of direct device connection between the controller and actuators.
  • Allocation of licensed spectrum for closed-loop control operations. Licensed spectrum may further be used as a complement to unlicensed spectrum, e.g. to enhance reliability.
  • Reservation of dedicated air-interface resources for each link.
  • Combination of multiple diversity techniques, such as frequency, antenna, and various forms of spatial diversity (e.g. via relaying), to approach the high reliability target within stringent end-to-end latency constraints.
  • Utilizing OTA time synchronization to satisfy jitter constraints for isochronous operation.
A typical industrial closed-loop motion control application is based on individual control events. Each closed-loop control event consists of a downlink transaction followed by a synchronous uplink transaction, both of which are executed within a cycle time. Control events within a manufacturing unit may have to occur isochronously. Factory automation considers application-layer transaction cycles between controller devices and sensor/actuator devices. Each transaction cycle consists of (1) a command sent by the controller to the sensor/actuator (downlink), (2) application-layer processing on the sensor/actuator device, and (3) a subsequent response by the sensor/actuator to the controller (uplink). Cycle time includes the entire transaction from the transmission of a command by the controller to the reception of a response by the controller. It includes all lower layer processes and latencies on the air interface as well as the application-layer processing time on the sensor/actuator.
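To make the timing structure of such a transaction cycle concrete, the following is a minimal Python sketch, assuming the cycle time (2 ms) and per-telegram latency budget (1 ms) quoted above; the controller and device objects and their methods (send_command, await_response, next_setpoint, update_state) are hypothetical placeholders for the actual transport and application layers and are not defined in this specification.
  # Illustrative sketch of one closed-loop motion-control transaction cycle.
  import time

  CYCLE_TIME_S = 2e-3        # full transaction cycle: downlink + processing + uplink
  LATENCY_BUDGET_S = 1e-3    # end-to-end latency constraint per telegram

  def control_cycle(controller, devices):
      """One isochronous control event executed within the cycle time."""
      cycle_start = time.perf_counter()
      for dev in devices:
          controller.send_command(dev, controller.next_setpoint(dev))        # downlink telegram (<= 56 bytes)
      for dev in devices:
          reply = controller.await_response(dev, timeout=LATENCY_BUDGET_S)   # uplink telegram
          controller.update_state(dev, reply)
      elapsed = time.perf_counter() - cycle_start
      if elapsed > CYCLE_TIME_S:
          raise RuntimeError("cycle overrun: %.3f ms" % (elapsed * 1e3))
      time.sleep(CYCLE_TIME_S - elapsed)    # wait for the next isochronous cycle boundary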
Figure D.1-1 depicts how communication may occur in factory automation. In this use case, communication is confined to local controller-to-sensor/actuator interaction within each manufacturing unit. Repeaters may provide spatial diversity to enhance reliability.
D.1.1  Service area and connection density
The maximum service volume in motion control is currently set by hoisting solutions, i.e. cranes, and by the manipulation of large machine components, e.g. propeller blades of wind-energy generators. Cranes can be rather wide and quite high above the shop floor, even within a factory hall. In addition, they typically travel along an entire factory hall.
An approximate dimension of the service area is 100 x 100 x 30 m.
Note that production cells are commonly much smaller (< 10 x 10 x 3 m). There are typically about 10 motion-control connections in a production cell, which results in a connection density of up to 10⁵ km⁻².
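As a rough, illustrative consistency check of this figure (assuming the roughly 10 connections are referred to a production-cell footprint of about 10 m x 10 m, and using 1 km² = 10⁶ m²):
  10 connections / (10 m x 10 m) = 0.1 m⁻² = 0.1 x 10⁶ km⁻² = 10⁵ km⁻²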
D.1.2  Security
Network access and authorization in an industrial factory deployment is typically provided and managed by the factory owner with its ID management, authentication, confidentiality and integrity.
Note that motion control telegrams usually are not encrypted due to stringent cycle time requirements.
A comprehensive security framework for factories has been described in IEC 62443.
D.2  Discrete automation
Discrete automation encompasses all types of production that result in discrete products: cars, chocolate bars, etc. Automation that addresses the control of flows and chemical reactions is referred to as process automation (see clause D.3). Discrete automation requires communications for supervisory and open-loop control applications, as well as process monitoring and tracking operations inside an industrial plant. In these applications, a large number of sensors distributed over the plant forward measurement data to process controllers on a periodic or event-driven basis. Traditionally, wireline fieldbus technologies have been used to interconnect sensors and control equipment. Due to the sizable extension of a plant (up to 10 km²), the large number of sensors, rotary joints, and the high deployment complexity of wired infrastructure, wireless solutions have made inroads into industrial process automation.
This use case requires support of a large number of sensor devices per plant as well as high communication service availability (99,99%). Furthermore, power consumption is relevant since some sensor devices are battery-powered with a targeted battery lifetime of several years while providing measurement updates every few seconds. Range also becomes a critical factor due to the low transmit power levels of the sensors, the large size of the plant and the high reliability requirements on transport. End-to-end latency requirements typically range between 10 ms and 1 s. User experienced data rates can be rather low since each transaction typically comprises less than 256 bytes. However, there has been a shift from fieldbuses featuring somewhat modest data rates (~ 2 Mbit/s) to those with higher data rates (~ 10 Mbit/s) due to the increasing number of distributed applications and also "data-hungry" applications. An example of the latter is the visual control of production processes. For this application, the user experienced data rate is typically around 10 Mbit/s and the transmitted packets are much larger.
The existing wireless technologies rely on unlicensed bands. Communication is therefore vulnerable to interference caused by other technologies (e.g. WLAN). With the stringent requirements on transport reliability, such interference is detrimental to proper operation.
The use of licensed spectrum could overcome the vulnerability to same-band interference and therefore enable higher reliability. Utilization of licensed spectrum can be confined to those events where high interference bursts in unlicensed bands jeopardize reliability and end-to-end latency constraints. This allows sharing the licensed spectrum between process automation and conventional mobile services.
Multi-hop topologies can provide range extension and mesh topologies can increase reliability through path redundancy. Clock synchronization will be highly beneficial since it enables more power-efficient sensor operation and mesh forwarding.
The corresponding industrial communication solutions are referred to as fieldbuses. The related standard suite is IEC 61158.
A typical discrete automation application supports downstream and upstream data flows between process controllers and sensors/actuators. The communication consists of individual transactions. The process controller resides in the plant network. This network interconnects via base stations to the wireless (mesh) network, which hosts the sensor/actuator devices. Typically, each transaction uses less than 256 bytes. An example of a controller-initiated transaction service flow is:
  1. The process controller requests sensor data (or an actuator to conduct actuation). The request is forwarded via the plant network and the wireless network to the sensors/actuators.
  2. The sensors/actuators process the request and send a reply in the upstream direction to the controller. This reply may contain an acknowledgement or a measurement reading.
An example of a sensor/actuator device-initiated transaction service flow:
  1. The sensor sends a measurement reading to the process controller. The reading is forwarded via the wireless (mesh) network and the plant network.
  2. The process controller may send an acknowledgement in the opposite direction.
For both controller- and sensor/actuator-initiated service flows, upstream and downstream transactions may occur asynchronously. Both flows are sketched below.
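The following minimal Python sketch illustrates the two transaction flows just described, assuming a generic transport object whose send()/receive() methods stand in for the plant network plus the wireless (mesh) network; the function and parameter names and the acknowledgement handling are illustrative, and only the flow structure and the 256-byte bound come from the text.
  MAX_TELEGRAM_BYTES = 256    # typical upper bound per transaction quoted above

  def controller_initiated(transport, controller_id, device_id, request: bytes) -> bytes:
      """Controller requests sensor data (or an actuation); the device replies upstream."""
      assert len(request) < MAX_TELEGRAM_BYTES
      transport.send(src=controller_id, dst=device_id, payload=request)   # downstream
      return transport.receive(dst=controller_id, src=device_id)          # upstream: ack or reading

  def device_initiated(transport, device_id, controller_id, reading: bytes, want_ack: bool = True):
      """A sensor pushes a measurement reading; the controller may acknowledge it."""
      assert len(reading) < MAX_TELEGRAM_BYTES
      transport.send(src=device_id, dst=controller_id, payload=reading)   # upstream
      return transport.receive(dst=device_id, src=controller_id) if want_ack else None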
Figure D.2-1 depicts how communication may occur in discrete automation. In this use case, communication runs between process controller and sensor/actuator device via the plant network and the wireless (mesh) network. The wireless (mesh) network may also support access for handheld devices for supervisory control or process monitoring purposes.
D.2.1  Service area and connection density
Factory halls can be rather large and even quite high. We set the upper limit at 1000 x 1000 x 30 m. Note that the connection density might vary quite a bit throughout factory halls. It is, for instance, much higher along an assembly line than in an overflow buffer. Also, the density usually increases towards the factory floor. Typically, there is at least one connection per 10 m², which results in a connection density of up to 10⁵ km⁻².
D.2.2  Security
Network access and authorization in an industrial factory deployment is typically provided and managed by the factory owner with its ID management, authentication, confidentiality and integrity.
A comprehensive security framework for factories has been described in IEC 62443.
D.3  Process automation
Process automation has much in common with discrete automation (see Annex D.2). Instead of discrete products (cars, chocolate bars, etc.), process automation addresses the production of bulk products such as petrol and reactive gases. In contrast to discrete automation, motion control is of limited or no importance. Typical end-to-end latencies are 50 ms. User experienced data rates, communication service availability, and connection density vary noticeably between applications. Below we describe one emerging use case (remote control via mobile computational units, see Annex D.3.1) and a contemporary use case (monitoring, see Annex D.3.2).
Note that discrete automation fieldbuses (see Annex D.2) are also used in process automation.
D.3.1  Remote control
Some of the interactions within a plant are conducted by automated control applications similar to those described in Annex D.2. Here too, sensor output is requested in a cyclic fashion, and actuator commands are sent via the communication network between a controller and the actuator. Furthermore, there is an emerging need for the control of the plant by personnel on location. Typically, monitoring and managing of distributed control systems takes place in a dedicated control room.
Staff deployment to the plant itself occurs, for instance, during construction and commissioning of a plant and in the start-up phase of the processes. In this scenario, the locally deployed staff taps into the same real-time data as provided to the control room. These remote applications require high data rates (~ 100 Mbit/s) since the staff on location needs to view inaccessible locations with high definition (e.g. emergency valves) and since their colleagues in the control room benefit from high-definition footage from body cameras (HD or even 4K).
For both kinds of applications, a very high communication service availability is needed (99,9999%). Typically, only a few control loops are fully automated and only a handful of control personnel is deployed on location, so that the connection density is rather modest (~ 1000 km⁻²).
D.3.2  Monitoring
The monitoring of states, e.g. the liquid level in process reactors, is a paramount task. Due to the ever-changing states, measurement data is either pulled or pushed from the sensors in a cyclic manner. Some sensors are more conveniently accessed via wireless links, and monitoring of these sensors via handheld terminals during, e.g., maintenance is also on the rise. This kind of application entails rather modest user experienced data rates (~ 1 Mbit/s), and since this kind of data is "only" an indicator of, e.g., which process should be stopped in order to avoid an overflow, and not an input for automated control loops, the requirement on communication service availability is comparably low (99,9%). Note that emergency valves and the like typically are operated locally and do not rely on communication networks. However, many sensors are deployed in chemical plants etc., so that the connection density can readily reach 10 000 km⁻².
D.3.3  Service area
While, for instance, chemical plants and refineries can readily span several square kilometres, the dedicated control rooms are typically only responsible for a subset of that area. Such subsets are often referred to as plants, and their typical size is 300 x 300 x 50 m.
D.4  Electricity distribution
D.4.1  Medium voltage
An energy-automation domain that hitherto has only seen very little application of mobile-network technology is the backhaul network, i.e. the part of the distribution grid between primary substations (high voltage → medium voltage) and secondary substations (medium voltage → low voltage). In Figure D.4.1-1 we depict a medium-voltage ring together with energy-automation use cases that either are already deployed or are anticipated within the near future.
The primary substation and the secondary substations are supervised and controlled by a distribution-management system (DMS). If energy-automation devices in the medium-voltage power line ring need to communicate with each other and/or the DMS, a wireless backhaul network needs to be present (orange "cloud" in Figure D.4.1-1).
A majority of applications in electricity distribution adhere to the communication standard IEC 60870-5-104; however, its modern "cousin", IEC 61850, is experiencing rapidly increasing popularity. The communication requirements for IEC 61850 applications can be found in IEC 61850-90-4. Communication in wide-area networks is described in IEC 61850-90-12.
Usually, power line ring structures have to be open in order to avoid a power imbalance in the ring (green dot in the Figure). Examples of energy automation already implemented in medium-voltage grids (albeit in low numbers) are power-quality measurements and the measurement of secondary-substation parameters (temperature, power load, etc.) [13]. Other use cases are demand response and the control of distributed, renewable energy resources (e.g. photovoltaics).
A use case that could also be realised in the future is fault isolation and system restoration (FISR). FISR automates the management of faults in the distribution grid. It supports the localization of the fault, the isolation of the fault, and the restoration of the energy delivery. For this kind of automation, the pertinent sensors and actuators broadcast telegrams about their states (e.g. "emergency closer idle") and about actions (e.g. "activating closer") into the backhaul network. This information is used by the ring main units (RMUs) as input for their decision algorithms. We illustrate this use of automation telegrams for an automated FISR event in Figure D.4.1-1. Let us assume the distribution lines are cut at the location indicated by the bolt of lightning in the Figure. In that case, the RMUs between the bolt and the green load switch (open) will be without power. The RMUs next to the "bolt" automatically open their load switches after having sensed the loss of electric connectivity between them. They both broadcast these actions into the backhaul network. Typically, these telegrams are repeated many times while the time between adjacent telegrams increases exponentially. This communication pattern leads to sudden, distributed surges in the consumed communication bandwidth. After the RMUs next to the "bolt" have opened their switches, the RMU that so far has kept the power line ring open (green dot in Figure D.4.1-1) closes its load switch. This event, too, is broadcast into the backhaul network. The typical maximum end-to-end latency for this kind of broadcast is 25 ms with a peak experienced data rate of 10 Mbit/s.
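The telegram repetition pattern described above can be sketched as follows in Python; the broadcast() callable, the number of repetitions and the initial gap are illustrative assumptions, and only the exponentially growing interval between adjacent telegrams is taken from the text.
  import time

  def broadcast_fisr_telegram(broadcast, telegram, repeats=8, initial_gap_s=0.01, growth=2.0):
      """Broadcast a state/action telegram (e.g. "activating closer") and repeat it
      with an exponentially growing gap between repetitions."""
      gap = initial_gap_s
      for _ in range(repeats):
          broadcast(telegram)      # domain multicast into the backhaul network
          time.sleep(gap)
          gap *= growth            # time between adjacent telegrams increases exponentially
This repetition pattern is what produces the sudden, distributed bandwidth surges mentioned above when several RMUs react to the same fault at once.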
Automatic fault handling in the distribution grid shortens outage time and frees the operators in the distribution control centre to handle more complicated situations. Therefore, automated FISR may help to improve performance indexes such as the System Average Interruption Duration Index and the System Average Interruption Frequency Index.
Automation telegrams are typically distributed via domain multicast. As explained above, the related communication pattern can be "bursty", i.e. only a few automation telegrams are sent when the distribution network operates nominally (~ 1 kbit/s), but, for instance, a disruption in the power line triggers a short-lived avalanche of telegrams from related applications in the ring (≥ 1 Mbit/s).
D.4.1.1  Service area and connection densityWord-p. 75
Service coverage is only required along the medium-voltage line. In Europe, the line often forms a loop (see Figure D.4.1-1), while deployments in other countries, e.g. the USA, tend to extend linearly over distances up to ~ 100 km. The vertical dimension of the poles in a medium-voltage line is typically less than 40 m. Especially in urban areas, the number of ring main units can be rather large (> 10 km⁻²), and the number of connections to each ring main unit is expected to increase swiftly once economical, suitable wireless connectivity becomes available. We predict connection densities of up to 1 000 km⁻².
D.4.1.2  Security
Due to its central role in virtually every country on earth, electricity distribution is heavily regulated. Security assessments for, e.g., deployments in North America need to adhere to the NERC CIP suite [14]. Technical implementations are described in standard suites such as IEC 62351.
D.4.2  Energy distribution - high voltage
In order to avoid region- or even nation-wide power outages, wide-area power system protection is on the rise. "When a major power system disturbance occurs, protection and control actions are required to stop the power system degradation, restore the system to a normal state and minimize the impact of the disturbance. The present control actions are not designed for a fast-developing disturbance and may be too slow. Local protection systems are not able to consider the overall system, which may be affected by the disturbance. Wide area disturbance protection is a concept of using system-wide information and sending selected local information to a remote location to counteract propagation of the major disturbances in the power system." [15]. Protection actions include, "among others, changes in demand (e.g. load shedding), changes in generation or system configuration to maintain system stability or integrity and specific actions to maintain or restore acceptable voltage levels." [16]. One specific application is phasor measurement for the stabilisation of the alternating-current phase in a transport network. For this, the voltage phase is measured locally and sent to a remote control centre. There, this information is processed and automated actions are triggered. One such action can be the submission of telegrams to power plants, instructing them to either accelerate or decelerate their power generators in order to keep the voltage phase in the transport network stable. A comprehensive overview of this topic can be found elsewhere in the literature [17].
This kind of automation requires very low end-to-end latencies (5 ms) [16] and, due to its critical importance for the operation of society, a very high communication service availability (99,9999%).
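A minimal Python sketch of the phasor-based control step described above is given below; the phase tolerance, the sign convention and the function names are illustrative assumptions, and only the accelerate/decelerate telegrams and the 5 ms end-to-end latency figure come from the text.
  PHASE_TOLERANCE_DEG = 0.5      # illustrative stability band, not a value from this specification
  LATENCY_BUDGET_S = 5e-3        # end-to-end latency requirement quoted above for the telegram exchange

  def control_centre_step(measured_phase_deg, reference_phase_deg, send_telegram):
      """Compare a locally measured voltage phase with a reference and instruct power plants."""
      error = measured_phase_deg - reference_phase_deg
      if error > PHASE_TOLERANCE_DEG:
          send_telegram("decelerate")    # assumed convention: local phase leads the reference
      elif error < -PHASE_TOLERANCE_DEG:
          send_telegram("accelerate")    # assumed convention: local phase lags the reference
      # within tolerance: no action needed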
D.4.2.1  Service area and connection density
As is the case for medium-voltage distribution networks (see Annex D.4.1), connectivity in high-voltage automation has to be provided mainly along the power line. The distances to be covered can be substantial (hundreds of kilometres in rural settings), while shorter links are prevalent in metropolitan areas. The number of connections in wide-area power system protection is rather low, but, due to the sliver-shaped service area, the connection density can be rather high (1000 km⁻²).
D.4.2.2  Security
Due to its central role in virtually every country on earth, electricity distribution is heavily regulated. Security assessments for, e.g., deployments in North America need to adhere to the NERC CIP suite [14]. Technical implementations are described in standard suites such as IEC 62351.
D.5  Intelligent transport systems - infrastructure backhaul
Intelligent Transport Systems (ITS) embrace a wide variety of communications-related applications that are intended to increase travel safety, minimize environmental impact, improve traffic management, and maximize the benefits of transportation to both commercial users and the general public. Over recent years, the emphasis in intelligent vehicle research has turned to co-operative systems in which the traffic participants (vehicles, bicycles, pedestrians, etc.) communicate with each other and/or with the infrastructure.
Cooperative ITS is the term used to describe technology that allows vehicles to become connected to each other, and to the infrastructure and other parts of the transport network. In addition to what drivers can immediately see around them, and what vehicle sensors can detect, all parts of the transport system will increasingly be able to share information to improve decision making. Thus, this technology can improve road safety through avoiding collisions, but also assist in reducing congestion and improving traffic flows, and reduce environmental impacts. Once the basic technology is in place as a platform, an array of applications can be developed.
Cooperative ITS can greatly increase the quality and reliability of information available about vehicles, their location and the road environment. In the future, cars will know the location of road works and the switching phases of traffic lights ahead, and they will be able to react accordingly. This will make for safer and more convenient travel and faster arrival at the destination. On-board driver assistance, coupled with two-way communication between vehicles and between cars and road infrastructure, can help drivers to better control their vehicle and hence have positive effects in terms of safety and traffic efficiency. An important role in this is played by so-called road-side units (RSUs). Vehicles can also function as sensors reporting weather and road conditions, including incidents. In this way, cars can be used as information sources for high-quality information services.
RSUs are connected to the traffic control centre (TCC) for management and control purposes. They broadcast, e.g., traffic-light information (RSU → vehicle) and relay traffic information from the TCC to the vehicles (TCC → RSU → vehicle), and they collect vehicle probe data for the TCC (vehicle → RSU → TCC). For reliable distribution of data, low-latency and high-capacity connections between RSUs (e.g. traffic lights, traffic signs, etc.) and the TCC are required. This type of application comes with rather tight end-to-end latency requirements for the communication service between RSU and TCC (10 ms) since relayed data needs to be processed in the TCC and, if needed, the results need to be forwarded to neighbouring RSUs. Also, the availability of the communication service has to be very high (99,9999%) in order to compete with existing wired technology and in order to justify the costly deployment and maintenance of RSUs. Furthermore, due to considerably large aggregation areas (see Annex D.5.1), considerable amounts of data need to be backhauled to the TCC (up to 10 Mbit/s per RSU).
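The following Python sketch illustrates the RSU ↔ TCC data flows described above; the class and method names are hypothetical, and only the flow directions (vehicle → RSU → TCC and TCC → RSU → vehicle) are taken from the text, with the quoted 10 ms latency and 10 Mbit/s figures constraining the backhaul link rather than this logic.
  from typing import Dict, Iterable, Optional

  class TrafficControlCentre:
      """Relays traffic information between road-side units (RSUs) and applies TCC logic."""

      def __init__(self, rsu_links: Dict[str, object]):
          self.rsu_links = rsu_links                        # rsu_id -> backhaul link to that RSU

      def on_probe_data(self, rsu_id: str, probe_data: dict) -> None:
          """Handle vehicle -> RSU -> TCC probe data and, if needed, push advice back out."""
          advice = self.process(probe_data)                 # e.g. derive a congestion warning
          if advice is not None:
              for neighbour in self.neighbours_of(rsu_id):  # TCC -> RSU -> vehicle direction
                  self.rsu_links[neighbour].broadcast(advice)

      def process(self, probe_data: dict) -> Optional[dict]:
          return None                                       # traffic-management logic is out of scope here

      def neighbours_of(self, rsu_id: str) -> Iterable[str]:
          return []                                         # topology lookup is out of scope here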
D.5.1  Service area and connection density
It is relatively hard to provide estimates for the service area dimension. One reason is that it depends on the placement of the base station relative to the RSUs. Also, the RSUs can, in principle, act as relay nodes for each other. The service area dimension stated in Table 7.2.3.2-1 indicates the size of the typical data collection area of an RSU (2 km along a road), from which the minimum spacing of RSUs can be inferred. The connection density can be quite high in case data is relayed between RSUs, i.e. along the road (1000 km⁻²).
E  Void
