Content for TR 22.847, Word version 18.2.0



5.6  Live Event Selective Immersion

5.6.1  Description

To provide an immersive experience of a live football game to an audience not at the venue, multiple AI cameras are deployed in the stadium, collecting video and audio data from different angles for the generation of footage at the Application Server. The footage streams are provided to the audience for selection.
Each AI camera predictively follows the live action, depending on the object to follow, e.g. the ball or a player. The Application Server predicts the potential actions of the object based on the data collected from the cameras and instructs each camera on which object to follow and how to follow it. The AI camera acts based on the instructions from the Application Server.
This use case shows an example of a multi-modal service as defined in clause 4.1 and clause 4.2, in which the Application Server requires inputs from multiple UEs to generate the outputs. The service flows and potential new requirements of this use case also apply to other multi-modal interactive system use case scenarios.

5.6.2  Pre-conditions

Figure 5.6.2-1: Live Event Selective Immersion
Each AI camera interacts with the Application Server as a UE over the 5GS network.
Alice is watching the football game, and the footage streams for selection are transmitted to her UE over the 5GS network.
Footage and motion predictions are generated at the Application Server based on the video and audio data collected by the AI cameras.
Each AI camera has its own responsibilities:
  • Camera#1: data collection for motion prediction and footage generation; the primary camera, placed in the location with the best view
  • Camera#2: data collection for motion prediction, placed in the best location for capturing motion
  • Camera#3: data collection for motion prediction and footage generation
  • Camera#4: data collection for footage generation
  • Camera#5: data collection for motion prediction
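The camera roles listed above can be expressed as a simple configuration mapping. The sketch below is purely illustrative; the role names ("motion_prediction", "footage_generation") and the helper function are assumptions for this example, not anything defined by TR 22.847.

```python
# Illustrative mapping of the five AI cameras to the tasks their
# collected data serves (role names are hypothetical).
CAMERA_ROLES = {
    "camera1": {"motion_prediction", "footage_generation"},  # primary, best view
    "camera2": {"motion_prediction"},                        # best motion-capture spot
    "camera3": {"motion_prediction", "footage_generation"},
    "camera4": {"footage_generation"},
    "camera5": {"motion_prediction"},
}

def cameras_for(task: str) -> list[str]:
    """Return the cameras whose collected data feeds the given task."""
    return sorted(cam for cam, roles in CAMERA_ROLES.items() if task in roles)
```

Under this mapping, motion prediction draws on Camera#1, #2, #3 and #5, while footage generation draws on Camera#1, #3 and #4, matching steps 4 and 5 of the service flows.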

5.6.3  Service Flows

  1. The football game starts. Camera#1, Camera#2, Camera#3, Camera#4 and Camera#5, as UEs, are switched on and registered with the 5GS network to collect video and audio data for the Live Event Selective Immersion service.
  2. The Application Server informs 5GS that the UEs corresponding to Camera#1, Camera#2, Camera#3, Camera#4 and Camera#5 are subject to the service application, and provides the QoS requirements of these UEs and the coordination policies for this multi-modal service, for assistance from 5GS.
  3. The QoS requirements and the coordination policy are applied in 5GS. Camera#1, Camera#2, Camera#3, Camera#4 and Camera#5 transmit the collected data over 5GS to the Application Server at the target QoS.
  4. The Application Server makes motion predictions based on the data received from Camera#1, Camera#2, Camera#3 and Camera#5, and generates footage based on the data received from Camera#1, Camera#3 and Camera#4.
  5. The Application Server transmits the motion prediction of the object(s) over 5GS to Camera#1, Camera#2, Camera#3 and Camera#5, and transmits the footage streams over 5GS to Alice's UE.
  6. People gather at the gate of the stadium to celebrate the winning score, and network congestion occurs from time to time. Based on the coordination policy of the multi-modal service, when the target QoS of Camera#1 cannot be guaranteed, 5GS reduces the QoS of Camera#4 to ensure that the QoS of Camera#1 is guaranteed; when the congestion is relieved, 5GS increases the QoS of Camera#4 while the target QoS of Camera#1 is still guaranteed.
  7. The network congestion becomes more serious, and the target QoS of Camera#2 cannot be guaranteed. Since the motion data collected by Camera#2 is mandatory for motion prediction, without which the motion prediction cannot be made, 5GS releases the resources of Camera#2 and Camera#5 based on the coordination policy of the multi-modal service.
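The coordination decisions in steps 6 and 7 can be sketched as a small decision function. This is a hypothetical illustration only: the `Flow` fields, the priority of Camera#1 over Camera#4, and the "mandatory" flag on Camera#2 are assumptions drawn from the narrative above, not 5GS-specified behaviour.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    camera: str
    guaranteed: bool   # whether the target QoS is currently met
    mandatory: bool    # data is indispensable for its task

def coordinate(flows: dict[str, Flow]) -> list[str]:
    """Return the actions 5GS takes under the (assumed) coordination policy."""
    actions = []
    # Step 6: when the primary camera's target QoS cannot be guaranteed,
    # degrade a lower-priority footage-only flow instead.
    if not flows["camera1"].guaranteed:
        actions.append("reduce QoS of camera4")
    # Step 7: when a mandatory motion-prediction flow cannot be guaranteed,
    # motion prediction is impossible, so release all flows that serve
    # only that task.
    if not flows["camera2"].guaranteed and flows["camera2"].mandatory:
        actions.append("release resources of camera2 and camera5")
    return actions
```

Camera#5 is released together with Camera#2 because it serves only motion prediction, which cannot proceed without Camera#2's data; Camera#1 and Camera#3 keep their resources since they also feed footage generation.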

5.6.4  Post-conditions

Alice receives multiple footage streams and selects one of them to enjoy the immersive experience.

5.6.5  Existing features partly or fully covering the use case functionality

Clause 6.7.2 and clause 6.8 of TS 22.261 define the following policy control requirements, which can be reused for this use case:
  • The 5G system shall be able to provide the required QoS (e.g. reliability, end-to-end latency, and bandwidth) for a service and support prioritization of resources when necessary for that service.
  • The 5G system shall be able to support QoS for applications in a Service Hosting Environment.
  • The 5G system shall support the creation and enforcement of prioritisation policy for users and traffic, during connection setup and when connected.
  • Based on operator policy, the 5G system shall support a real-time, dynamic, secure and efficient means for authorized entities (e.g. users, context aware network functionality) to modify the QoS and policy framework. Such modifications may have a variable duration.
Clause 6.23.2 of TS 22.261 defines the following requirement enabling 5GS to notify an authorized entity of communication events:
  • The 5G system shall be able to provide notification of communication events to authorized entities per pre-defined patterns (e.g. every time the bandwidth drops below a pre-defined threshold for QoS parameters the authorized entity is notified, and the event is logged).

5.6.6  Potential New Requirements needed to support the use case

[PR 5.6.6-1]
The 5G system shall support a mechanism to allow an authorized 3rd party to provide QoS policy for flows of multiple UEs associated with an application. The policy may contain e.g. the expected 5GS handling and the associated triggering event.
[PR 5.6.6-2]
The 5G system shall support a mechanism to apply QoS policy for flows of multiple UEs associated with an application received from an authorized 3rd party.
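To make [PR 5.6.6-1] and [PR 5.6.6-2] concrete, the sketch below shows one possible shape of a per-application QoS policy that an authorized 3rd party might provide, pairing each triggering event with the expected 5GS handling across the UEs of the application. All field names and values are illustrative assumptions; no such data structure is defined in this TR.

```python
# Hypothetical QoS coordination policy for the flows of multiple UEs
# associated with one application (field names are illustrative).
qos_policy = {
    "application": "live-event-selective-immersion",
    "ues": ["camera1", "camera2", "camera3", "camera4", "camera5"],
    "rules": [
        {
            "trigger": "target QoS of camera1 not guaranteed",
            "handling": "reduce QoS of camera4",
        },
        {
            "trigger": "congestion relieved and camera1 guaranteed",
            "handling": "restore QoS of camera4",
        },
        {
            "trigger": "target QoS of camera2 not guaranteed",
            "handling": "release resources of camera2 and camera5",
        },
    ],
}
```

Each rule corresponds to one of the coordination decisions described in steps 6 and 7 of the service flows.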
