Name: Support of Immersive Teleconferencing and Telepresence for Remote Terminals
Acronym: ITT4RT
Responsible group: S4
WID: SP-180985
Rapporteur: Ozgur Oyman, Intel

UID: 770024
Name: EVS Codec Extension for Immersive Voice and Audio Services
Acronym: IVAS_Codec
Responsible group: S4
WID: SP-170611
Rapporteur: Bin Wang, Huawei Technologies Co Ltd
Summary based on the input provided by Nokia Corporation in SP-220275.
This Work Item extends the functionality of the Multimedia Telephony Service for IMS (MTSI) in TS 26.114 by adding support for unidirectional Virtual Reality (VR) video transmission.
Previously, TS 26.114 covered real-time multimedia communication of traditional media (e.g., audio and video). The completed WI on ITT4RT enables new VR use cases on top of that specification, allowing terminals to transmit and receive, in addition to traditional media, unidirectional 360-degree video that can be viewed using Head-Mounted Displays and 5G devices. This makes the end-user experience more compelling and immersive.
In addition, two more documents have been produced as part of this WI: TR 26.962 (Immersive Teleconferencing and Telepresence for Remote Terminals (ITT4RT) Operation and Usage Guidelines) and TR 26.862 (Immersive Teleconferencing and Telepresence for Remote Terminals (ITT4RT) Use Cases, Requirements and Potential Solutions).
The ITT4RT WI adds to TS 26.114 the following:
Support of still images, image sequences and still 360-degree background
Support of 360-degree video for H.265
Support of overlays on top of 360-degree video
Support of multiple video projection formats
Support for fisheye video
Support of camera calibration for Network-based Stitching
Support of picture packing for 360-degree video
Support of viewport dependent processing
Support of improved feedback for 360-degree video
Support of captured content replacement for screen sharing
Recommended audio mixing gains
Example SDP offers and answers for 360-degree video.
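As a rough illustration of the last point, the sketch below assembles a minimal SDP media description of the kind a sender might offer for a unidirectional 360-degree video stream. It is not copied from TS 26.114: the payload type, picture size and the 360-degree marker attribute are placeholders chosen here for illustration, not the normative ITT4RT syntax.

    # Illustrative only: attribute names and values below are placeholders and do
    # not reproduce the normative ITT4RT SDP examples in TS 26.114.
    sdp_offer_video = "\r\n".join([
        "m=video 49154 RTP/AVPF 99",             # video media line, dynamic payload type 99
        "a=rtpmap:99 H265/90000",                # HEVC, as used for 360-degree video
        "a=sendonly",                            # unidirectional: this party only sends
        "a=3gpp_360video",                       # placeholder marker for 360-degree video
        "a=imageattr:99 send [x=7680,y=3840]",   # hypothetical packed picture size
    ]) + "\r\n"

    print(sdp_offer_video)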
Summary based on the input provided by Qualcomm in SP-220300.
Since the initial development and last update of the TV Video Profiles defined in TS 26.116, TV and mobile devices have improved and now support higher decoding capabilities. In particular, new TV sets and 5G mobile devices entering the market since 2020 support up to 8K video decoding as well as 8K display capabilities.
8K has recently been trialled and introduced in several services, and broader ecosystem support is emerging: 8K encoders have been announced, 8K TV sets are shipping, and content is being produced in 8K. Furthermore, distribution of 8K TV content over 5G is feasible. In order to provide full interoperability for 8K TV services in the context of 5G, this work item specifies an HEVC-based 8K TV operation point in TS 26.116, as well as the corresponding media decoding capabilities for 5G Media Streaming (5GMS) in TS 26.511, to enable support for up to 8K video.
More specifically, this work item completed the following work:
Defined new 8K TV operation point(s) for the TV Video Profiles, with a conforming bitstream requirement based on the H.265/HEVC Main-10 Profile, Main Tier, in TS 26.116.
Defined the relevant ISO BMFF encapsulation, CMAF media profile and DASH signalling for the new 8K TV operation point in TS 26.116.
Included the newly defined decoding capabilities and associated profiles and operation points into 5G Media Streaming for TV Services in TS 26.511.
Documented typical traffic characteristics of 8K TV video services in TR 26.925.
The work was carried out in close collaboration with MPEG CMAF and DVB to align the media profiles.
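As a rough, non-normative illustration of the kind of DASH signalling involved, the Python sketch below builds a hypothetical Representation element for an 8K HEVC stream. The codecs string (assumed here to indicate HEVC Main-10 at Level 6.1, the level that accommodates 7680x4320 video), frame rate and bandwidth are assumptions made for illustration; the normative operation point, CMAF media profile and DASH signalling are those defined in TS 26.116 and TS 26.511.

    # Illustrative only: the exact 8K operation point and its DASH/CMAF signalling
    # are defined normatively in TS 26.116 and TS 26.511.
    from xml.etree.ElementTree import Element, tostring

    representation = Element("Representation", {
        "id": "video-8k",
        "mimeType": "video/mp4",
        "codecs": "hvc1.2.4.L183.B0",   # assumed HEVC Main-10, Level 6.1 codecs string
        "width": "7680",
        "height": "4320",
        "frameRate": "60",              # hypothetical frame rate
        "bandwidth": "80000000",        # hypothetical 80 Mbit/s
    })
    print(tostring(representation, encoding="unicode"))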
Summary based on the input provided by Qualcomm in SP-220637.
The Technical Report provides a full characterization framework for video codecs in the context of 5G services. This framework permits the evaluation of the performance of existing 3GPP codecs and the identification of the potential benefits of new codecs.
The framework fulfils the following aspects:
A comprehensive set of scenarios relevant to 3GPP services is described in clause 6. For each scenario the anchors for existing 3GPP codecs (H.264/AVC and H.265/HEVC), the version of the reference software for the anchors, and their associated configurations are defined.
A set of reference sequences is identified per scenario, and each sequence is described in more detail in Annex C.
For each scenario, one or more performance metrics are defined. Each metric is described in more detail in clause 5.5.
The overall characterization framework process is defined in clause 5 and in Annexes B, D, E, F, and G.
New codecs, namely H.266/VVC, MPEG-5 EVC and AOMedia AV1, are identified in clause 8. For each scenario, a version of their respective reference software is identified and configurations as close as possible to the anchor configurations are defined.
For all codecs, metrics are computed and documented as part of the Technical Report. The report only documents objective metrics.
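To give a concrete sense of what such an objective metric looks like, the sketch below computes a per-frame luma PSNR, assuming 10-bit content. It is only an illustration; the metric definitions actually used are those in clause 5.5 of the TR.

    import numpy as np

    def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 1023.0) -> float:
        """Peak signal-to-noise ratio in dB between two equally sized frames.

        `peak` defaults to 1023, the maximum sample value of 10-bit video.
        """
        diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
        mse = np.mean(diff ** 2)
        if mse == 0.0:
            return float("inf")          # identical frames
        return 10.0 * np.log10(peak ** 2 / mse)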
According to Figure 15.3-1, all of these metrics are used to characterize the test codecs against the anchor codecs using the Bjøntegaard-Delta (BD)-Rate gain, which expresses the bitrate savings of the new codec relative to the existing one as a percentage.
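A minimal sketch of that characterization step is shown below, assuming the classical Bjøntegaard computation with a third-order polynomial fit of log-rate over quality (e.g., PSNR). The scripts accompanying the TR may use a different interpolation, so this is illustrative rather than a reproduction of them.

    import numpy as np

    def bd_rate(anchor_points, test_points):
        """Bjøntegaard-Delta rate between two rate-distortion curves.

        Each argument is a list of (bitrate, quality) points, e.g. four rate
        points with PSNR as the quality value.  The return value is the average
        bitrate difference of the test codec relative to the anchor, in percent;
        negative values mean the test codec saves bitrate at equal quality.
        """
        r_a, q_a = zip(*anchor_points)
        r_t, q_t = zip(*test_points)

        # Fit log-rate as a third-order polynomial of quality for each codec.
        fit_a = np.polyfit(q_a, np.log(r_a), 3)
        fit_t = np.polyfit(q_t, np.log(r_t), 3)

        # Integrate both fits over the overlapping quality range.
        lo, hi = max(min(q_a), min(q_t)), min(max(q_a), max(q_t))
        int_a = np.polyval(np.polyint(fit_a), hi) - np.polyval(np.polyint(fit_a), lo)
        int_t = np.polyval(np.polyint(fit_t), hi) - np.polyval(np.polyint(fit_t), lo)

        # Average log-rate difference, converted to a percentage.
        avg_log_diff = (int_t - int_a) / (hi - lo)
        return (np.exp(avg_log_diff) - 1.0) * 100.0

    # Hypothetical rate/PSNR points for an anchor and a test codec:
    anchor = [(1000, 34.1), (2000, 36.8), (4000, 39.0), (8000, 41.2)]
    test = [(700, 34.2), (1400, 36.9), (2800, 39.1), (5600, 41.3)]
    print(f"BD-Rate: {bd_rate(anchor, test):+.1f}%")   # roughly -30% for these numbers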
The TR is supported by a large set of data available at https://dash-large-files.akamaized.net/WAVE/3GPP/5GVideo/, including raw video sequences, anchor and test bitstreams, results, PNG plots and annotations, as well as a fully functional set of scripts that allows the setup and the results to be replicated.
This is the first time that 3GPP has carried out such an extensive baseline exercise for video codec evaluation and characterization. The study item was backed and supported by 23 3GPP members. While the framework is comprehensive, it also has some clear limitations; for example, the encoder configurations for each scenario may not have been defined stringently enough, leading to results that may not be fully comparable. Furthermore, the encoders used for the evaluation of the various codecs differ in maturity and features. Results in this document should always be considered with a clear understanding of the characterization conditions under which they were derived. The framework does not include subjective evaluation; it is purely based on objective metrics.
One important outcome of the work documented in this Technical Report is the characterization and evaluation of H.265/HEVC against relevant scenarios and its characterization against H.264/AVC. In addition, a first understanding of H.265/HEVC performance versus new codecs was developed. From the scenarios and results in this Technical Report it is observed that:
H.265/HEVC does not show any functional deficiencies or gaps, nor does it lack any relevant features.
In terms of compression efficiency, H.265/HEVC, evaluated based on the HM, performs sufficiently well for all the scenarios in this technical report.
Providing consistent HEVC-based interoperability in 3GPP services, for both traditional and new scenarios, is clearly beneficial. It is recommended that 3GPP consider upgrading its specifications to support profiles, levels, and possibly features available in HEVC. Such features may include better support for screen content and computer-generated content, XR/AR types of services, as well as low and very-low-latency services.
The framework and the initial results for new codecs demonstrate, for some codecs, coding performance improvements over H.265/HEVC of up to 50%. However, the initial results are not considered mature enough to support concrete recommendations on adding new codecs. The potential addition of any new codec to 3GPP services and specifications requires diligent preparation, including the identification of needs and requirements for the different scenarios, as well as a complete characterization against existing codecs. The information in this TR, as well as any new developments in 3GPP with respect to codecs in the latest specifications, could serve as a baseline for future work. Such an effort may lead to conclusions on the potential addition of a new codec to 3GPP services and specifications. However, no immediate need has been identified to initiate such follow-up work.
Summary based on the input provided by Qualcomm in SP-220626.
This work item improves the acoustic test methods in TS 26.132 by providing proper guidance on how to set up a UE featuring a non-traditional earpiece.
The acoustic performance of UEs is evaluated by the tests defined in TS 26.132. These tests were originally developed for handsets featuring a traditional earpiece, i.e., one in which sound radiates through an acoustic port outlet directed at the user's ear canal. Recently, UEs have come to market featuring other means of radiating sound to the user, e.g., through vibrating displays, necessitating an update of the 3GPP test specifications.
The HaNTE work item developed new test methods and assessed them through round-robin testing and listening experiments. Ultimately, the test methods in TS 26.132 were improved to specify how to mount a HaNTE UE for testing.
Summary based on the input provided by HEAD acoustics GmbH and Orange in SP-211417.
This work item extends the audio test specifications in TS 26.131 and TS 26.132 to analogue (wired) and digital (wired and wireless) electrical interfaces, which were not previously covered. The introduced test methods and requirements ensure proper interoperability (from an audio/acoustic point of view) between the interface and headsets.
The acoustic performance of UEs is evaluated by the terminal tests defined in TS 26.131 (requirements) and TS 26.132 (test methods). It is relevant to extend these tests to also cover the electrical interface (e.g., audio jack, Bluetooth or USB-C), since users can purchase compatible headsets and other products that connect to mobile phones via standardized interfaces.
The changes to these specifications introduced by the work item considered the following aspects:
A test setup for analogue and digital electrical interfaces was introduced, based on related work in Recommendations ITU-T P.381 [3] and P.383 [4].
Test methods, performance requirements and objectives were determined in a unified and highly comparable way for analogue and digital electrical interfaces.
Test methods, performance requirements and objectives were derived from existing ones for handset/headset UE, as well as from related work in Recommendation ITU-T P.381 [3] and P.383 [4].
Performance requirements and objectives as well as the applicability of the new test methods were validated in measurement series.
[3] Recommendation ITU-T P.381 (10/20): "Technical requirements and test methods for the universal wired headset or headphone interface of digital mobile terminals".
[4] Recommendation ITU-T P.383 (06/21): "Technical requirements and test methods for multi-microphone wired headset or headphone interfaces of digital wireless terminals".