
Content for  TR 22.847  Word version:  18.2.0


5.4  Support of skillset sharing for cooperative perception and manoeuvring of robots

5.4.1  Description

Today, most automated driving vehicles rely on a single controller, which is the vehicle itself, sensing and controlling on its own [3]. Since 3GPP Rel-14, LTE-based support for V2V features has been developed and tested through collaborative participation of the automotive and communication industries. However, it remains challenging to use automated driving functionalities in general unstructured settings if control is based on a single controller that has no knowledge of how neighbouring vehicles will behave.
Consequently, the automated driving system has to allocate an extra safety margin to the planned trajectory, which reduces traffic flow and causes large-scale inefficiency in vehicle networks where non-V2X and V2X-enabled vehicles may coexist. This problem is not limited to the automated driving of road vehicles; the same applies to the operation of automated manoeuvring robots in unstructured settings. Without cooperation, the field of perception of a vehicle or robot is limited to the local coverage of its onboard sensors, in terms of both relative distance and relative angle.
As a technology enabler against these problems of guaranteeing safety and traffic efficiency, sensor information sharing [9] and manoeuvre sharing [8] are being studied in SAE. The Tactile Internet for V2N (potentially with assistance from an edge cloud instead of general cloud servers) or V2V can enable an ultra-fast and reliable exchange of highly detailed sensor data sets between nearby vehicles, along with haptic information on trajectory [3]. It would also be one of the key factors for so-called "cooperative perception and manoeuvring" functionalities [10]: planning cooperative manoeuvres among multiple automated driving vehicles (or robots), such as plan creation, target point generation and target point risk assessment. It is through Tactile Internet connectivity that vehicles can perform cooperative perception of the driving environment based on fast fusion of high-definition local and remote maps collected by the onboard sensors of the surrounding vehicles (e.g., video streaming from camera, radar or lidar). This augments the sensing range of each vehicle and extends the time horizon for situation prediction, with huge benefits for safety [3]. The onboard sensors in today's automated driving vehicles generate data flows of up to 8 Gbit/s [3]. All these requirements call for new network architectures interconnecting vehicles and infrastructure through ultra-low-latency networks based on the Tactile Internet for cooperative driving services [3].
This use case concerns the support of (1) cooperative perception and manoeuvring and (2) extension of the sensing range for cooperative automated driving scenarios using the Tactile Internet, with moving robots (e.g., local delivery robots) as examples. Manoeuvring and perception obtained via haptic and multi-modal communications (also known as skillset sharing) are shared between the controller and controlee in a very timely manner.

5.4.2  Pre-conditions

Four robot UEs, S1, S2, C1 and C2, are each working on delivery tasks from one geographic point to another.
Robots UEs S1 and S2 are automated driving robots with a standalone steer/control, manoeuvring in a crowded village.
Robots UEs C1 and C2 are automated driving robots with steer/control and manoeuver/skillset sharing functionalities, manoeuvring in another crowded village.
The roads that the robots use in these villages are unstructured (i.e., no lane separators, no lane marks, etc.), and the conditions for the robots to move are the same in both villages.

5.4.3  Service Flows

  1. S1 is moving at a speed of X and is getting close to S2.
    1. S1 does not know exactly what trajectory S2 is planning to move along. Therefore, S1 reduces the current moving speed to (0.5 * X).
    2. During the operation at this reduced speed, S1 still does not know the detailed trajectory of S2. Therefore, S1 reserves a distance of Y1 meters relative to S2 and gets ready to further reduce the speed or to make a full stop, monitoring the movement of S2.
    3. S1 and S2 pass each other and continue their trip to the destinations, respectively.
  2. C1 is moving at a speed of X and is getting close to C2.
    1. C1 knows exactly what trajectory C2 is planning to move along, as they share manoeuvres. Therefore, C1 only reduces the current moving speed to (0.9 * X).
    2. During the operation at this reduced speed, C1 still knows the detailed trajectory of C2 through manoeuvre sharing. Therefore, C1 reserves a distance of only Y2 metres (Y2 << Y1) relative to C2 and is ready to reduce its speed further to avoid any chance of collision. However, there is very little chance that both robots need to make a full stop, as they share steer/control: one can yield space to the other only when necessary.
    3. C1 and C2 pass each other and continue their trip to the destinations, respectively.
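The contrast between the two flows above can be sketched numerically. Below is a minimal Python sketch; the cruise speed and interaction-zone length are illustrative assumptions, not values from this TR:

```python
def passby_delay(v_cruise, slow_factor, zone_len):
    """Extra time (s) spent traversing an interaction zone of `zone_len`
    metres when a robot slows from `v_cruise` to `slow_factor * v_cruise`,
    compared with keeping the cruise speed throughout."""
    v_slow = slow_factor * v_cruise
    return zone_len / v_slow - zone_len / v_cruise

# Assumed illustrative values: X = 2.0 m/s cruise speed, 20 m interaction zone.
X = 2.0
standalone = passby_delay(X, 0.5, 20.0)   # S1/S2: slow down to 0.5 * X
cooperative = passby_delay(X, 0.9, 20.0)  # C1/C2: slow down to 0.9 * X
print(f"standalone delay:  {standalone:.2f} s")   # 10.00 s
print(f"cooperative delay: {cooperative:.2f} s")  # 1.11 s
```

With these assumed numbers, the standalone robots lose roughly nine times more time per encounter than the cooperating ones, before even accounting for the occasional full stop that step 1.2 anticipates.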

5.4.4  Post-conditions

The total travel time of S1 and S2 is much greater than that of C1 and C2.
The total energy consumption (e.g., to accelerate from a low speed back to X) of S1 and S2 is greater than that of C1 and C2.
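The energy claim can be made concrete via the kinetic energy needed to re-accelerate after each pass-by. The robot mass and speed in this sketch are assumptions for illustration, not values from this TR:

```python
def reaccel_energy(mass_kg, v_cruise, slow_factor):
    """Kinetic energy (J) needed to accelerate back from the reduced speed
    `slow_factor * v_cruise` to the cruise speed `v_cruise`,
    ignoring friction and drive-train losses."""
    v_slow = slow_factor * v_cruise
    return 0.5 * mass_kg * (v_cruise**2 - v_slow**2)

# Assumed: 50 kg delivery robot, cruise speed X = 2.0 m/s.
e_standalone  = reaccel_energy(50.0, 2.0, 0.5)  # slowed to 0.5 * X, ~75 J
e_cooperative = reaccel_energy(50.0, 2.0, 0.9)  # slowed to 0.9 * X, ~19 J
```

Under these assumptions a standalone robot spends roughly four times more re-acceleration energy per encounter, and the gap widens further whenever a full stop is required.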
Figure 5.4.4-1 illustrates, in a simplified way, the speed-change behaviours without (left section) and with (right section) real-time multi-modal communication for interactive haptic control and feedback (skillset sharing).
Figure 5.4.4-1: Simplified examples of the stochastic behaviour of speed changes (and the minimum margin to keep between robots, or vehicle-styled robots) without (left section) and with (right section) real-time multi-modal communication for interactive haptic control and feedback (skillset sharing). The road is assumed to be in a general unstructured setting, e.g., no lane separators or marks.

5.4.5  Existing features partly or fully covering the use case functionality

V2X performance requirements are found in TS 22.185 and TS 22.186, (e)CAV requirements in TS 22.104, and VIAPA requirements in TS 22.263.

5.4.6  Potential New Requirements needed to support the use case

[PR 5.4.6-1]
The 5G system shall be able to support real-time multi-modal communication for interactive haptic control and feedback with the KPIs summarized in Table 5.4.6-1.
Table 5.4.6-1

| Use case | Direction | Max allowed end-to-end latency (NOTE 2) | Service bit rate: user-experienced data rate | Reliability | Message size (byte) | # of UEs | UE speed | Service area | Remarks (NOTE 1) |
|---|---|---|---|---|---|---|---|---|---|
| Skillset sharing, low-dynamic robotics (including teleoperation) | Controller to controlee | 5-10 ms | 0.8-200 kbit/s (with compression) | [99.999%] | n DoFs: (2n)-(8n), n = 1, 3, 6 | - | Stationary or pedestrian | 100 km² | Haptic (position, velocity) |
| Skillset sharing, low-dynamic robotics (including teleoperation) | Controlee to controller | 5-10 ms | 0.8-200 kbit/s (with compression) | [99.999%] | n DoFs: (2n)-(8n), n = 1, 10, 100 | - | Stationary or pedestrian | 100 km² | Haptic feedback |
| | | 10 ms | 1-100 Mbit/s | [99.999%] | 1500 | - | Stationary or pedestrian | 100 km² | Video |
| | | 10 ms | 5-512 kbit/s | [99.9%] | 50 | - | Stationary or pedestrian | 100 km² | Audio |
| Highly dynamic / mobile robotics | Controller to controlee | 1-5 ms | 16 kbit/s - 2 Mbit/s (without haptic compression encoding); 0.8-200 kbit/s (with haptic compression encoding) | [99.999%] (with compression); [99.9%] (without compression) | n DoFs: (2n)-(8n), n = 1, 3, 6 | - | High-dynamic | TBD | Haptic (position, velocity) |
| Highly dynamic / mobile robotics | Controlee to controller | [1-5 ms] | 0.8-200 kbit/s | [99.999%] (with compression); [99.9%] (without compression) | n DoFs: (2n)-(8n), n = 1, 10, 100 | - | High-dynamic | TBD | Haptic feedback |
| | | 1-10 ms | 1-10 Mbit/s | [99.999%] | 2000-4000 | - | High-dynamic | 4 km² | Video |
| | | 1-10 ms | 100-500 kbit/s | [99.9%] | 100 | - | High-dynamic | 4 km² | Audio |

NOTE 1: Haptic feedback is typically a haptic signal, such as force level, torque level, vibration or texture.
NOTE 2: The latency requirements are expected to be satisfied even when multi-modal communication for skillset sharing is via an indirect network connection (i.e., relayed by a UE-to-network relay).
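The haptic bit-rate ranges in Table 5.4.6-1 can be sanity-checked from the message size and an assumed packet update rate. A minimal sketch, assuming a 1 kHz uncompressed haptic sampling rate and an illustrative ~50 Hz average rate under perceptual deadband compression (the rates are assumptions, not requirements from this TR):

```python
def haptic_bitrate_kbps(n_dofs, bytes_per_dof, update_hz):
    """Required bit rate in kbit/s for a haptic stream carrying `n_dofs`
    degrees of freedom at `bytes_per_dof` bytes each, sent `update_hz`
    times per second."""
    return n_dofs * bytes_per_dof * update_hz * 8 / 1000

# 1 DoF, 2 bytes/DoF at an assumed 1 kHz (uncompressed): 16 kbit/s,
# matching the lower bound of the "without haptic compression" range.
uncompressed = haptic_bitrate_kbps(1, 2, 1000)

# Same payload at an assumed ~50 Hz average rate after perceptual
# deadband compression: 0.8 kbit/s, matching the compressed lower bound.
compressed = haptic_bitrate_kbps(1, 2, 50)
```

Larger DoF counts and message sizes scale the result linearly, which is why the table expresses message size as a multiple of n.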
