
Content for  TR 26.925  Word version:  19.0.0


6  Overview on technological developments for existing and emerging services

6.1  Technology Developments

6.1.1  Overview

This clause collects developments in the industry on compression advances, content formats, protocol improvements and other advances that may impact the traffic characteristics documented in clause 4.

6.1.2  Compression Improvements

Due to the increasing consumption of video content with higher resolutions, the need for more efficient video compression techniques is growing. The first version of the High Efficiency Video Coding (HEVC) standard [15], jointly developed by the ITU-T VCEG and the ISO MPEG, was finalized in 2013. A wide range of products and services support HEVC [15] for video encoding/decoding, especially for Ultra High Definition (UHD) content, where HEVC [15] can provide around 50% bitrate savings for the same subjective quality as its predecessor H.264/AVC [14].
Both codecs are defined as part of the TV Video Profiles in TS 26.116 and are also the foundation of the VR Video Profiles in TS 26.118.
Work on video compression technology beyond the capabilities of HEVC [15] is continued by MPEG and ITU-T, which created the Joint Video Exploration Team (JVET) on future video coding in October 2015. Many new coding tools were proposed in the context of JVET, which eventually led to a Call for Proposals on video coding technology with compression capabilities beyond HEVC [15]. The reference software used in the exploration phase of JVET, called the Joint Exploration Model (JEM), served as the basis for the majority of responses to the call. Responses demonstrated compression efficiency gains of around 40 % or more with respect to HEVC [15]. This initiated the work of the (renamed) Joint Video Experts Team (JVET) on the development of a new video coding standard, to be known as Versatile Video Coding (VVC).
In January 2019, MPEG started working on a new video coding standard to be known as MPEG-5 Essential Video Coding (EVC). MPEG-5 EVC aims to provide a standardized video coding solution to address business needs in some use cases, such as video streaming, where existing ISO video coding standards have not been as widely adopted as might be expected from their purely technical characteristics. A baseline profile is restricted to coding tools that are either more than 20 years old or otherwise expected to be available on royalty-free terms; a main profile adds a small number of additional tools, each of which is individually capable of being either deactivated or switched to the corresponding baseline tool. The target coding efficiency for the Call for Proposals was to be at least as efficient as HEVC. This target was exceeded by approximately 24 % in the evaluated responses to the Call for Proposals. The development of the MPEG-5 EVC standard is expected to be completed in 2020.
Figure 6.1.2-1 shows the typical improvement of video compression rates over time as well as the target for the VVC standard. It can be observed that compression technology has enabled bitrate reductions of 50 % within a time frame of 7-10 years. Most of the gains come at the cost of increased encoding and decoding complexity, enabled by processing advances in line with Moore's law.
[Figure 6.1.2-1: Video bitrate efficiency improvements and target for the final VVC standard (reproduced with permission from Fraunhofer)]
Table 6.1.2-1 provides a summary of the expected compression efficiency of different codecs and expectations on target bitrates for different video technologies.
Table 6.1.2-1: Expected compression efficiency and target bitrates (Random Access)
HEVC:
  • Coding performance: -40 % vs AVC (objective), -60 % vs AVC (subjective) [24] [25] [26]
  • Targeted bitrate [24] [25] [26]:
    • 4k Statmux: 10-13 Mbps (see NOTE)
    • 4k CBR: 18-25 Mbps
    • 8k CBR: 40-56 Mbps
    • 8k high quality: 80-90 Mbps
EVC-Baseline:
  • Coding performance: -30 % vs AVC (objective) [27] [29]; subjective: n/a
  • Targeted bitrate: not expected for 4k or 8k
EVC-Main:
  • Coding performance: -24 % vs HEVC (objective) [27] [29]; subjective: n/a
  • Targeted bitrate, expected [27] [29]:
    • 4k CBR: 15-19 Mbps
    • 8k CBR: 30-60 Mbps
VVC:
  • Coding performance: -30 % vs HEVC (objective) [28] [29]; best CfP response: -42 % vs HEVC; target: -50 % vs HEVC; subjective: n/a
  • Targeted bitrate, expected [28] [29]:
    • 4k CBR: 10-15 Mbps
    • 8k CBR: 25-35 Mbps
NOTE: Average, with peaks up to 25 Mbps thanks to statistical multiplexing (Statmux).
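The generation-over-generation gains in Table 6.1.2-1 compound. A minimal sketch of how the table's 4k bitrate ranges relate to the quoted percentage gains, assuming a hypothetical AVC reference bitrate of 30 Mbps (the percentages are the objective random-access figures from the table; the AVC reference point is an illustration value, not from the table):

```python
# Rough projection of 4k bitrates across codec generations, using the
# objective random-access gains quoted in Table 6.1.2-1. The AVC
# reference bitrate (30 Mbps) is an assumed value for illustration.
avc_4k_mbps = 30.0  # hypothetical AVC reference point

gains_vs_predecessor = {
    "HEVC": 0.40,  # -40 % vs AVC (objective)
    "VVC":  0.30,  # -30 % vs HEVC (objective)
}

hevc_4k = avc_4k_mbps * (1 - gains_vs_predecessor["HEVC"])
vvc_4k = hevc_4k * (1 - gains_vs_predecessor["VVC"])

print(f"HEVC 4k estimate: {hevc_4k:.1f} Mbps")  # 18.0 Mbps
print(f"VVC  4k estimate: {vvc_4k:.1f} Mbps")   # 12.6 Mbps
```

Under this assumed starting point, the HEVC estimate (18.0 Mbps) falls inside the table's HEVC 4k CBR range (18-25 Mbps) and the VVC estimate (12.6 Mbps) inside the expected VVC range (10-15 Mbps).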
Also noteworthy is the improvement of encoders over time, even for existing standards, which likewise leads to bitrate reductions at the same quality.
Based on this information, it can be expected that within the time frame until 2025, video compression technology will permit bitrate reductions of around 50 % compared to what is possible today with HEVC [15].
However, Jevons' Paradox should not be forgotten: increasing the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource. The compression efficiency gains may therefore well spur even more traffic.
Additional updates to video characteristics and bitrates are available in TR 26.955.

6.1.3  New Media Formats

New media formats in media generation and distribution are developed on a continuous basis, taking into account improvements in capturing and display systems. TR 26.949 collects TV distribution formats as they were considered in the time frame of 2015 to 2017. The parameters are summarized in clause 5.4 of the present document in terms of:
  • Spatial Resolution
  • Frame Rates
  • Colorimetry
  • Other parameters
The latest version, referred to as UHD-1 Phase 2, is summarized according to clause 5.4.8 of TR 26.949:
  • UHD-1 Phase 2 exclusively operates with HEVC [15] at a bit depth of 10 bit.
  • Only square pixel resolutions and only progressive scan are supported.
  • Only BT.2100 [33] non-constant luminance YCbCr is supported.
  • High Dynamic Range is added through either the PQ10 or the HLG10 system, relying on HEVC Main-10 Level 5.1 [15].
  • Optional SEI metadata may be provided in the bitstream using HEVC-specified SEI messages [15].
  • High Frame Rate (HFR) is added, i.e. frame rates of 100, 120 000/1 001 and 120 Hz.
  • HFR can be supported by two means:
    • Single PID HFR bitstream.
    • Dual PID and temporal scalability, when the bitstream is intended to be decodable by UHD-1 Phase 1 receivers at half the frame rate.
  • For HFR, HEVC Main-10 Level 5.2 [15] is required.
  • Next Generation Audio (NGA) is supported to enable immersive and personalized audio content.
The TV Video Profiles in TS 26.116 address coded representations of the UHD-1 Phase 2 signals for the most part. Table 6.1.3-1 provides an overview of the TV-relevant formats considered in the context of 3GPP TV Video Profiles.
Table 6.1.3-1: TV-relevant formats considered in the context of 3GPP TV Video Profiles
Operation Point name | Resolution | Picture aspect ratio | Scan | Max. frame rate | Chroma format | Chroma sub-sampling | Bit depth | Colour space format | Transfer characteristics
H.264/AVC 720p HD | 1280 × 720 | 16:9 | Progressive | 30 | Y'CbCr | 4:2:0 | 8 | BT.709 | BT.709
H.265/HEVC 720p HD | 1280 × 720 | 16:9 | Progressive | 30 | Y'CbCr | 4:2:0 | 8 | BT.709 | BT.709
H.264/AVC Full HD | 1920 × 1080 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 8 | BT.709 | BT.709
H.265/HEVC Full HD | 1920 × 1080 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 8; 10 | BT.709; BT.2020 | BT.709; BT.2020
H.265/HEVC UHD | 3840 × 2160 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 10 | BT.2020 | BT.2020
H.265/HEVC Full HD HDR | 1920 × 1080 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 10 | BT.2020 | BT.2100 PQ
H.265/HEVC UHD HDR | 3840 × 2160 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 10 | BT.2020 | BT.2100 PQ
H.265/HEVC Full HD HDR HLG | 1920 × 1080 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 10 | BT.2020 | BT.2100 HLG
H.265/HEVC UHD HDR HLG | 3840 × 2160 | 16:9 | Progressive | 60 | Y'CbCr | 4:2:0 | 10 | BT.2020 | BT.2100 HLG
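To put the compressed bitrates of clause 6.1.2 into perspective, the raw (uncompressed) bitrate of the H.265/HEVC UHD operation point can be computed directly from its format parameters (3840 × 2160, 60 fps, 10 bit, 4:2:0); the 25 Mbps comparison value is the upper end of the HEVC 4k CBR range quoted in Table 6.1.2-1:

```python
# Raw bitrate of a 3840 x 2160, 60 fps, 10-bit, 4:2:0 signal, compared
# against a compressed CBR budget from Table 6.1.2-1.
width, height = 3840, 2160
frame_rate = 60
bit_depth = 10
samples_per_pixel = 1.5  # 4:2:0: luma plus quarter-resolution Cb and Cr

raw_bps = width * height * frame_rate * bit_depth * samples_per_pixel
print(f"Raw bitrate: {raw_bps / 1e9:.2f} Gbit/s")  # ~7.46 Gbit/s

compressed_mbps = 25  # upper end of the HEVC 4k CBR range
ratio = raw_bps / (compressed_mbps * 1e6)
print(f"Compression ratio at {compressed_mbps} Mbps: about {ratio:.0f}:1")
```

The roughly 300:1 ratio illustrates why codec efficiency, rather than raw format capability, dominates the traffic characteristics of UHD distribution.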
Looking further into the future, there is currently only one 8K broadcast service, operated by the Japanese public service broadcaster NHK. It runs 12 hours a day and is to be utilised as part of the promotion for the upcoming Olympic Games in Japan; the service is supported by a government initiative. A summary of the service launched by NHK was published in [42], and a more detailed description can be found in the IBC paper "Ready for 8K UHDTV broadcasting in Japan" [41].
France Télévisions performed a trial of 8K delivery of the Roland Garros Tennis Championship over 5G in June 2019; details can be found in [43].
For VR production and distribution, the ITU recommends [TBA] that, for the eye not to perceive individual pixels in 360-degree VR, images of 30K by 15K should be provided, though other criteria may be used in system design.
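The 30K by 15K figure can be sanity-checked against angular resolution. The sketch below assumes a visual-acuity rule of thumb of roughly 60 pixels per degree (one arc minute per pixel, typical 20/20 vision); this threshold is an illustration assumption, not from the present document:

```python
# Pixels per degree of a 30K x 15K equirectangular 360-degree image,
# compared with an assumed ~60 ppd visual-acuity threshold (1 arc
# minute per pixel). The acuity value is an assumption for illustration.
h_pixels, v_pixels = 30_000, 15_000
h_fov_deg, v_fov_deg = 360, 180  # full sphere

ppd_h = h_pixels / h_fov_deg  # ~83.3 pixels per degree
ppd_v = v_pixels / v_fov_deg  # ~83.3 pixels per degree
acuity_ppd = 60               # assumed acuity rule of thumb

print(f"Horizontal: {ppd_h:.1f} ppd, vertical: {ppd_v:.1f} ppd")
print(f"Exceeds ~{acuity_ppd} ppd threshold: {ppd_h > acuity_ppd}")
```

Under this assumption, the recommended format provides some headroom above the acuity threshold in both directions.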

6.1.4  Protocol Improvements

Most video traffic uses HTTP and TCP for delivery over both wired and wireless Internet. The Transmission Control Protocol, mostly known as TCP or TCP/IP [17], was invented over 40 years ago. Over the years it has evolved steadily and has become the number-one transport protocol on the Internet, and eventually also on mobile networks. Today, around 75% of the traffic in mobile data networks is encrypted. TCP, in combination with TLS, requires three round trips before the actual data can be sent. However, innovation within TCP has become very hard.
In response to these challenges with slow innovation of TCP, Internet companies started experimenting with proprietary protocols built on top of UDP. UDP is a very basic protocol: it provides only the bare minimum functionality and is therefore well suited as a substrate for new protocols. It is well supported by all infrastructure on the Internet, and protocols on top of UDP can be implemented in applications, which allows rapid deployment of new versions. The experiments with UDP-based protocols evolved into an effort to bring in the engineering community and collaborate openly on a single protocol framework on top of UDP. An Internet Engineering Task Force (IETF) working group has been established to specify a protocol called QUIC (Quick UDP Internet Connections), and the first QUIC Internet standard is likely to be published shortly [18].
The QUIC protocol is the next generation of transport for the Internet and is on track to become ubiquitous on end-user platforms and within all server-side workflows. Unlike prior standard technologies, which are struggling to innovate due to compatibility issues with existing infrastructure, QUIC operates on top of UDP in the application layer, which brings the flexibility to deploy new features in rapid iterations.
Google introduced QUIC back in 2013 as an experimental protocol to reduce TCP connection and transport latencies. QUIC minimizes the number of set-up round trips by combining the UDP transport with its own cryptographic handshake. For repeat connections to the same origin server, QUIC facilitates a zero round-trip-time (0-RTT) setup.
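The round-trip counts behind these latency claims can be tallied in a small sketch. This is a simplified model (full TLS 1.2 handshake, no TCP Fast Open, a single uniform network RTT whose 40 ms value is an assumption for illustration):

```python
# Simplified count of network round trips completed before a client can
# send its first application-data byte. The handshake structure follows
# the standard protocols; the 40 ms RTT is an assumed example value.
SETUP_RTTS = {
    "TCP + TLS 1.2 (full handshake)": 1 + 2,  # TCP SYN/ACK, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,                   # TCP handshake, then one TLS round trip
    "QUIC (first contact)": 1,                # combined transport + crypto handshake
    "QUIC (0-RTT, known server)": 0,          # request rides in the first packet
}

rtt_ms = 40  # assumed network round-trip time
for name, rtts in SETUP_RTTS.items():
    print(f"{name}: {rtts} RTT -> {rtts * rtt_ms} ms before first data")
```

With these assumptions, a returning QUIC client saves three full round trips (120 ms here) over a classic TCP + TLS 1.2 setup before any video data flows.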
QUIC also re-implements TCP's loss recovery over UDP and, by using multiplexed connections, eliminates TCP's head-of-line blocking. This ensures that a lost packet does not block all streams, but only the stream whose data it carried. Last but not least, QUIC moves congestion control into the application and user space, enabling rapid evolution of the protocol, as opposed to kernel-space TCP.
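The head-of-line-blocking difference can be illustrated with a toy delivery model (hypothetical packet traces, not real TCP or QUIC code): with a single shared sequence space, a lost packet holds back every later packet of every stream, whereas per-stream sequencing releases the unaffected stream immediately.

```python
# Toy model of head-of-line blocking. Each packet carries
# (global_seq, stream, stream_offset, data); the packet with "a2" is lost.
sent = [(0, "A", 0, "a1"), (1, "A", 1, "a2"), (2, "B", 0, "b1"), (3, "B", 1, "b2")]
lost = {1}  # the packet carrying "a2" never arrives
received = [p for p in sent if p[0] not in lost]

def tcp_like(received):
    """Single shared sequence space: delivery stops at the first gap,
    blocking later packets of every stream until retransmission."""
    out, expected = [], 0
    for seq, stream, off, data in sorted(received):
        if seq != expected:
            break
        out.append(data)
        expected += 1
    return out

def quic_like(received):
    """Per-stream sequence spaces: a gap only blocks its own stream."""
    out, expected = {}, {}
    for seq, stream, off, data in sorted(received):
        if off == expected.get(stream, 0):
            out.setdefault(stream, []).append(data)
            expected[stream] = off + 1
    return out

print(tcp_like(received))   # ['a1'] -- b1 and b2 are stuck behind the loss
print(quic_like(received))  # {'A': ['a1'], 'B': ['b1', 'b2']}
```

Stream B is fully delivered in the per-stream model even though stream A is still waiting for its retransmission.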
QUIC has been designed so that it can be enabled seamlessly for existing video workflows without any changes needed in the video formats or players. As an example, Akamai reports that in a measurement from a soccer event in 2018, a 25% throughput improvement for the median viewer was achieved compared to TCP on desktop. QUIC performance gains on mobile compared to TCP/TLS, while still positive, have been found to be less significant than on desktop, in particular for video content.
There are several features of the protocol that contribute to improved video quality, among others:
  • With a usual TCP and TLS session, which forms HTTPS streaming, three or four exchanges of information back and forth between client and server are needed before the video request can be made. With QUIC, none are needed, assuming the client has communicated with the server previously: the first packet from the client already contains the video request, and the first packet from the server already contains the video content. If the client has not communicated with the server before, only one round trip is needed. This greatly reduces latency. TCP has similar features available with the new version of TLS, TLS 1.3, and the TCP Fast Open feature, but their deployment is complicated. The 0-RTT (zero round-trip time) connection establishment for returning connections applies to about 50% of connections.
  • QUIC also uses more efficient loss recovery, due to more information being provided about lost packets and their timing.
  • QUIC allows multiplexing without head-of-line blocking, i.e. if one packet is lost, only the stream it belongs to is affected rather than all requests.
  • QUIC also allows connection migration across different access networks.
  • It also allows the performance of each feature to be measured, and features to be applied selectively for specific deployments.
On mobile, QUIC benefits also appear tangible. On Android devices, Google claims that QUIC has helped to reduce the latency of Google Search responses by 3.6% and YouTube video buffering by 15.3%. It was found that by November 2017 QUIC represented 20% of total mobile traffic, and it was expected to grow to 35% by the end of 2018. According to MVI data, video accounts for approximately 58% of total mobile Internet traffic, and video represents 64% of total QUIC traffic. It was projected that by November 2018 approximately 90% of Internet traffic would be encrypted and QUIC would constitute 32% of global Internet traffic.
As QUIC is an encryption-based protocol, like HTTP over TLS, traffic is typically encrypted over the mobile network delivery path, meaning that the use of traditional traffic-management tools is limited. On top of what is also observed for HTTP/TLS, QUIC allows multiplexing multiple streams over a single connection, with the added downside that it is impossible to differentiate between the streams, as the signalling header with the stream identifier is also encrypted.
Another technology and protocol in media distribution is developed under the umbrella of ABR Multicast. With the recent proliferation of live streaming over the Internet, especially of premium sports, issues in scale and quality have been exposed. The challenge lies partially in scaling the services to millions of simultaneous users, but also in the associated peering and delivery costs as well as the end-to-end latency. Multicast ABR addresses a problem that had already been solved in the context of managed networks for IPTV with IGMP. It is quite difficult, however, to replicate IGMP exactly over an unmanaged network. In practice it is nevertheless possible to control the delivery of streams quite precisely over a large part of their journey: multicast ABR effectively provides a tunnel through which multiple unicast streams are combined into a single one, just as with traditional multicast. Multiple unicast streams are converted at the entrance of the tunnel into a single stream and back into multiple unicast streams as they exit, in a process known as transcasting. These processes have been documented, either as a standard or as guidelines towards a standard, by both the DVB [19] and CableLabs [20]. Products are being built around these guidelines, with many operators currently running proofs of concept or field trials of the technology.
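The scaling argument behind multicast ABR can be made concrete with a back-of-the-envelope calculation. The viewer count and stream bitrate below are hypothetical illustration values, not figures from the present document:

```python
# Back-of-the-envelope comparison of unicast vs multicast-ABR delivery
# across the shared "tunnel" portion of the network. Viewer count and
# stream bitrate are hypothetical values for illustration.
viewers = 1_000_000
bitrate_mbps = 10  # one ABR representation of the live event

unicast_gbps = viewers * bitrate_mbps / 1000    # one copy per viewer
multicast_gbps = 1 * bitrate_mbps / 1000        # one copy per stream in the tunnel

print(f"Unicast across the core:   {unicast_gbps:,.0f} Gbit/s")
print(f"Multicast across the core: {multicast_gbps:.2f} Gbit/s")
# The single multicast copy is fanned back out to per-viewer unicast
# only at the tunnel exit ("transcasting"), close to the viewers.
```

Even with modest assumptions, the tunnel carries one stream copy instead of a million, which is the core of the peering- and delivery-cost argument above.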

6.1.5  Impact on Media Services

The impact of these new developments on Media Service is FFS.
