| Figure 4.2.2-1 | Graphical representation of sign language translation in real-time communication |
| Figure 4.2.3-1 | Example of DNN-based Down/Up-scaler |
| Figure 4.2.3-2 | Neural network based post-processing for video coding use-case |
| Figure 4.2.4-1 | Crowdsourced NeRF creation |
| Figure 4.2.4-2 | NeRF and synthesized view distribution |
| Figure 4.2.5-1 | Workflow for NLP on speech |
| Figure 4.3.3-1 | FCM framework |
| Figure 4.4.1-1 | High level encoder diagram for JPEG-AI |
| Figure 4.4.1-2 | Encoder and decoder diagram of JPEG-AI |
| Figure 4.4.1-3 | Evaluation of JPEG-AI for parallel image reconstruction and network computer vision task from a single entropy decoded latent representation |
| Figure 5.1.1-1 | AI/ML model composition examples with a fully connected ANN |
| Figure 5.1.1-2 | General AI/ML model composition examples |
| Figure 5.1.2-1 | Split AI/ML model inference where the UE is the media data source with first inference endpoint on the UE |
| Figure 5.1.2-2 | Split AI/ML model inference where the UE is the media data source with first inference endpoint on the network |
| Figure 5.1.2-3 | Split AI/ML model inference where the network is the media source |
| Figure 5.2.2-1 | Basic architecture for AI/ML model delivery with inference in the UE |
| Figure 5.2.2-3 | Basic workflow for AI/ML model delivery with inference in the UE |
| Figure 5.2.2-4 | Basic workflow for adaptive model delivery update |
| Figure 5.2.3-1 | Basic architecture for split inference between the network and UE, with media data source in the network or from the UE via the network |
| Figure 5.2.3-2 | Basic architecture for split inference between the UE and network, with media data source in the UE |
| Figure 5.2.3-3 | Basic workflow for split inference between the network and UE |
| Figure 5.2.4-1 | Basic architecture for distributed/federated learning between the network and multiple UEs |
| Figure 5.2.4-2 | Basic workflow for distributed/federated learning between a UE and the network |
| Table 5.3-1 | Logical AI/ML functions |
| Figure 5.3.4-1 | AI/ML data delivery general architecture |
| Figure 5.3.5-1 | Procedures for split AI/ML operation |
| Figure 5.3.6-1 | Procedure for AI/ML model distribution and operation |
| Figure 5.3.7-1 | Procedure for distributed/federated learning |
| Figure 5.4.1-1 | AI/ML data delivery over IMS architecture |
| Figure 5.4.2-1 | Procedures for AI/ML model distribution |
| Figure 5.4.3-1 | Procedures for split AI/ML operation |
| Figure 5.5.1-1 | Architecture extensions to IMS to support data channels |
| Figure 5.5.2-1 | |
| Figure 6.2.4-1 | Main classes of AI/ML models |
| Figure 6.2.5-1 | Generation of a neural network representation (NNR) bitstream consisting of NNR units |
| Table 6.3.4-1 | Approaches and characteristics considered by MPEG FCM |
| Figure 6.4.1-1 | TensorFlow computational graph |
| Table 6.6.2-1 | Common AI/ML model information |
| Table 6.6.3-1 | AI/ML model information for split operations |
| Table 6.6.4-1 | Intermediate data information for split AI/ML operations |
| Table 6.6.5-1 | Service requirement information |
| Table 6.6.6-1 | Endpoint capability information |
| Table 6.6.7-1 | Federated learning information |
| Table 6.6.8-1 | Compression information |
| Table 6.6.8-2 | Intermediate data tensors and associated compression profile and characteristics |
| Figure 6.7.1-1 | Concept of the AIMET library |
| Table 6.7.2-1 | Application and verification of NNC in different use cases as reported by MPEG |
| Table 6.7.2-2 | Application of NNC in different federated learning use cases as reported by MPEG |
| Table 6.8-1 | User-plane metadata |
| Table 6.8-2 | User-plane metadata example |
| Table A.4.1-1 | Mapping of functions to each collaboration scenario |
| Figure A.4.2-1 | Derivative AI/ML data delivery architecture for collaboration scenario 1 |
| Figure A.4.3-1 | Derivative AI/ML data delivery architecture for collaboration scenario 2 |
| Figure A.4.4-1 | Derivative AI/ML data delivery architecture for collaboration scenario 3 |