Content for  TR 26.998  Word version:  18.0.0



4.6.8  Web Real-Time Communication (WebRTC)

WebRTC as an OTT application

Web Real-Time Communication (WebRTC) is an API and a set of protocols that enable real-time communication. The WebRTC protocols enable any two WebRTC agents to negotiate and set up a bidirectional, secure real-time communication channel. The WebRTC API is a JavaScript-based API that enables the development of applications which use the user's existing multimedia capabilities to establish real-time communication sessions. However, the WebRTC set of protocols is also accessible from other programming languages.
The WebRTC protocols are developed and maintained by the RTCWEB working group in the IETF. The WebRTC API is developed by the W3C.
The WebRTC API is decomposed into three layers:
  • An API for web developers, consisting mainly of the MediaStream, RTCPeerConnection, and RTCDataChannel objects.
  • An API for browser and user-agent implementers and providers.
  • An overridable API for audio/video capture and rendering and for network input/output, to which browser implementers may hook their own implementations.
The main WebRTC stack components are the voice engine, the video engine, and the transport component.
The transport component provides a secure transport channel over which both parties of the call communicate. It relies on an RTP protocol stack that uses the SRTP profile, with keys established through DTLS (DTLS-SRTP).
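To illustrate the DTLS-SRTP setup step, the sketch below extracts the DTLS certificate fingerprint that a peer advertises in its SDP, which the transport component verifies against the certificate presented during the DTLS handshake. The SDP fragment and the helper function are hypothetical examples, not taken from this report.

```javascript
// Hypothetical SDP fragment; the a=fingerprint attribute (RFC 8122)
// binds the SDP exchange to the peer's DTLS certificate.
const sdpFragment = [
  "m=audio 9 UDP/TLS/RTP/SAVPF 111",
  "a=setup:actpass",
  "a=fingerprint:sha-256 " +
    "4A:AD:B9:B1:3F:82:18:3B:54:02:12:DF:3E:5D:49:6B:" +
    "19:E5:7C:AB:3E:4B:65:2E:7D:46:3F:54:42:CD:54:F1",
].join("\r\n");

// Return the hash algorithm and fingerprint value, or null if absent.
function extractFingerprint(sdp) {
  const match = sdp.match(/^a=fingerprint:(\S+)\s+(\S+)/m);
  return match ? { algorithm: match[1], value: match[2] } : null;
}

const fp = extractFingerprint(sdpFragment);
console.log(fp.algorithm); // "sha-256"
```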
The following diagram depicts the WebRTC protocol stack:
Figure 4.6.8-1: WebRTC protocol stack
WebRTC delegates the signalling exchange to the application, which may freely choose the signalling protocol and format. However, the offer and answer are expressed in the SDP format, and the ICE candidates may be conveyed as strings or as JSON objects.
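Because the signalling format is application-defined, the following sketch shows one possible shape of such messages. The field names ("kind", "sdp", "candidate") are illustrative assumptions, not part of any standard; only the SDP body and the candidate fields mirroring the W3C RTCIceCandidateInit dictionary come from the WebRTC specifications.

```javascript
// Hypothetical application-defined signalling messages. WebRTC does not
// mandate a wire format, so the envelope below is an assumption.
function makeOfferMessage(sdp) {
  return JSON.stringify({ kind: "offer", sdp });
}

function makeCandidateMessage(candidate, sdpMid, sdpMLineIndex) {
  // ICE candidates may be relayed as strings or JSON; this JSON object
  // mirrors the fields of the W3C RTCIceCandidateInit dictionary.
  return JSON.stringify({ kind: "candidate", candidate, sdpMid, sdpMLineIndex });
}

const msg = JSON.parse(
  makeCandidateMessage(
    "candidate:1 1 udp 2130706431 192.0.2.1 54321 typ host", "0", 0));
console.log(msg.kind); // "candidate"
```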
WebRTC needs negotiation for the following purposes:
  • Negotiation of the media streams and formats: this relies on the SDP offer/answer mechanism to generate and validate media streams and parameters.
  • Negotiation of the transport parameters: this relies on ICE to identify and test ICE candidates. Whenever a higher-priority ICE candidate is validated, the connection switches to it.
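The priority that drives this candidate ordering can be computed with the recommended formula from RFC 8445 (clause 5.1.2.1), sketched below. The type-preference values are the ones RFC 8445 recommends; the function name and call sites are illustrative.

```javascript
// Candidate priority per RFC 8445, clause 5.1.2.1:
//   priority = 2^24 * (type preference) + 2^8 * (local preference)
//            + (256 - component ID)
// Recommended type preferences: host 126, peer-reflexive 110,
// server-reflexive 100, relayed 0.
const TYPE_PREFERENCE = { host: 126, prflx: 110, srflx: 100, relay: 0 };

function candidatePriority(type, localPreference, componentId) {
  return (
    (1 << 24) * TYPE_PREFERENCE[type] +
    (1 << 8) * localPreference +
    (256 - componentId)
  );
}

// A host candidate outranks a relayed one, so ICE prefers the direct path.
console.log(candidatePriority("host", 65535, 1));  // 2130706431
console.log(candidatePriority("relay", 65535, 1)); // 16777215
```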
The following call flow shows an example of the ICE negotiation process:
Figure 4.6.8-2: WebRTC ICE negotiation process
Due to the separation of the negotiation of the transport parameters from the negotiation of the media parameters, appropriate QoS negotiation needs to consider consecutive and asynchronous changes to the connection parameters. If a relay server, such as a TURN server, is deployed, the QoS negotiation needs to be extended to appropriately cover the outbound streams as well.

WebRTC framework for RTC Media Service Enablers

A subset of WebRTC, limited to the protocol stack and implementation and excluding the codecs and other media processing functions defined by W3C and/or IETF, is considered in clauses 6.5 and 8.3 to define an instantiation of AR conversational services.

4.6.9  Joint workshop with Khronos and MPEG on "Streamed Media in Immersive Scene Descriptions"

3GPP also participated in a joint workshop with Khronos and MPEG on the topic of "Streamed Media in Immersive Scene Descriptions" in September 2021, in order to identify common and complementary aspects of defining networked media. All presentations are provided in [53] and [54]. The workshop attracted more than 200 participants. A survey was conducted among the participants; there was broadly positive feedback on the event, with a request for a follow-up in 2022. Details are available in [54]. An initial summary of the main observations is provided as follows:
  • Complementary work - many touch points - collaboration seems to be beneficial
  • Specific topics identified, but to be digested further:
    • glTF and extensions by MPEG-I Scene description
    • Tools and implementation support
    • Vulkan video and VDI
    • Extended Realities: OpenXR, MPEG-I Phase 2 including AR, Interactivity, and Haptics
    • Systems and Split Rendering: OpenXR, 3GPP connectivity, MPEG codecs
  • Challenges: timelines, publication rules, IPR policies, membership
  • Opportunities: complementary expertise, implementation and developer support, joint promotion, focus
  • Proposed next steps:
    • continue the discussion
    • set up some kind of discussion platform
