
Content for  TR 22.874  Word version:  18.2.0


 

6.6  Shared AI/ML model monitoring

6.6.1  Description

AI/ML models are trained on a training data set to accomplish an established task. The tasks may range from image or speech recognition to forecasting aimed at optimizing, for example, handover performance (see TR 28.809) or tuning Core Network assisted parameters (see clause 5.4.6.2 of TS 23.501).
In each of these tasks, the provider of the shared AI/ML model may benefit from sharing a trained AI/ML model with the consumer(s) of the shared AI/ML model, or from distributed/federated AI/ML model training or split AI/ML model training over the 5G system. Furthermore, AI/ML model monitoring is a prerequisite for online learning in the network (e.g. via Reinforcement Learning), a set of techniques better suited to reacting promptly to service degradation.
Shared AI/ML model:
AI/ML model that is shared among different applications, e.g. the AI/ML model is pre-trained and provisioned to different consumers, or the AI/ML model is trained using a distributed/federated learning approach or by splitting the model training phase into different parts executed in different network locations.
Shared AI/ML model provider:
application server that is providing or managing a "shared AI/ML model".
Shared AI/ML model consumer:
application, e.g. running on the UE, that is using/consuming a "shared AI/ML model".
Due to changes in the scenario (i.e. in the context from which training data are collected), an AI/ML model may provide poor performance compared with the performance measured during the model testing phase. This can happen when, over time, the distribution of the input data for inference differs from the distribution of the training data, or when the AI/ML model is used in a different context. In this case, the shared AI/ML model provider should be able to promptly detect the performance degradation and react in order to avoid service degradation or disruption. Quite often, the update of a shared AI/ML model does not depend solely on the inference results of one shared AI/ML model consumer, as performance degradation could be due to some other error source, e.g. model input measurement errors. To detect an outdated shared AI/ML model, the shared AI/ML model provider can make use of inference results from multiple shared AI/ML model consumers, performing a spatial and temporal analysis before triggering a shared AI/ML model update.
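As an illustration, the multi-consumer check described above can be sketched as follows. The threshold, window length and minimum consumer count are illustrative assumptions, not values from this report; the idea is that an update is triggered only when low-confidence inference feedback is seen from several distinct consumers within a time window, so a single consumer's measurement error does not cause a retrain.

```python
from collections import defaultdict

CONF_THRESHOLD = 0.7   # assumed per-report confidence floor
MIN_CONSUMERS = 3      # assumed spatial criterion: distinct consumers affected
WINDOW_S = 60.0        # assumed temporal criterion: recent reports only

def model_outdated(reports, now):
    """reports: list of (consumer_id, timestamp, confidence) tuples."""
    low_conf = defaultdict(list)
    for consumer_id, ts, confidence in reports:
        if now - ts <= WINDOW_S and confidence < CONF_THRESHOLD:
            low_conf[consumer_id].append(confidence)
    # Trigger only if enough *distinct* consumers report degradation.
    return len(low_conf) >= MIN_CONSUMERS

reports = [
    ("ue-1", 100.0, 0.55),
    ("ue-2", 110.0, 0.60),
    ("ue-3", 120.0, 0.50),
    ("ue-4", 125.0, 0.95),  # healthy consumer, ignored
]
print(model_outdated(reports, now=130.0))  # prints True
```

A single faulty UE reporting low confidence would leave `model_outdated` returning False, matching the observation that degradation may stem from model input measurement errors at one consumer.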
Therefore, the shared AI/ML model provider, once it shares an AI/ML model over the 5G System with a shared AI/ML model consumer, needs to keep track of the model performance to detect possible performance degradation of the shared AI/ML model (e.g. based on inference feedback from the AI/ML model consumer, such as a lower confidence level).
Alternatively, the shared AI/ML model provider can split the AI/ML model training with a shared AI/ML model consumer to continuously improve the performance of the shared AI/ML model based on local training and/or inference feedback from the shared AI/ML model consumer. The local part of the shared AI/ML model is trained/fine-tuned under the shared AI/ML model provider's guidance, using data available at the shared AI/ML model consumer as input. The output of the local model training, or of an inference at the shared AI/ML model consumer, can be provided to the shared AI/ML model provider and used to supply further information to the 5G System to improve its operations.
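A minimal sketch of this split-training feedback loop, under illustrative assumptions (a one-parameter least-squares model, with the learning rate standing in for "provider guidance"): the consumer fine-tunes its local model part on locally available data and reports the result back to the provider.

```python
def local_fine_tune(w, data, lr):
    """One pass of gradient descent on a 1-D least-squares loss (w*x - y)^2."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of the squared error w.r.t. w
        w -= lr * grad
    return w

provider_weight = 0.0  # local part of the shared model, as sent to the UE
ue_local_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # data available at the UE
updated = local_fine_tune(provider_weight, ue_local_data, lr=0.05)
# The consumer reports `updated` back; the provider uses it to refine the
# shared model and, per the use case, to inform 5G System operation.
print(round(updated, 2))  # prints 1.89, moving toward the underlying slope of 2
```

The same shape applies when the local part is a neural-network head rather than a single weight: local data never leaves the consumer, only the training output does.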

6.6.2  Pre-conditions

The shared AI/ML model provider stores multiple AI/ML models along with their performance measured during the test phase.
The shared AI/ML model provider is capable of sharing AI/ML models with shared AI/ML model consumers leveraging the 5GS.
The AI/ML model provider is capable of splitting and/or distributing the AI/ML model training with/to a shared AI/ML model consumer leveraging the 5GS.
The shared AI/ML model consumer may run applications requiring the use of AI/ML models and download them from the AI/ML model provider via the 5GS.

6.6.3  Service Flows

  1. The shared AI/ML model provider wants to optimize the performance of some process by means of a shared AI/ML model.
  2. The shared AI/ML model provider sends the trained shared AI/ML model to the shared AI/ML model consumer at the UE leveraging on the 5GS.
  3. The UE receives the model and employs the model to perform local training and inference using data available on the UE.
  4. The shared AI/ML model provider monitors the context scenario (e.g. the UE data which is available to the application) in which the UE is running the shared AI/ML model and the model performance.
  5. If a change in the context scenario or in the model performance is detected, e.g. based on inference feedback from the shared AI/ML model consumer, the shared AI/ML model provider shares with the AI/ML model consumer an updated version of the shared AI/ML model, retrained to capture the new context with expected better performance, in order to avoid model performance degradation.
  6. The UE continues to run the updated model without experiencing performance degradation.
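The numbered flow above can be sketched as a provider-side control loop. All class and function names, the feedback fields, and the 0.7 confidence threshold are illustrative assumptions, not part of the flow itself.

```python
def retrain(model, context):
    """Stand-in for retraining the shared model on data from the new context."""
    return {"version": model["version"] + 1, "context": context}

class Consumer:
    """Shared AI/ML model consumer, e.g. an application running on the UE."""
    def __init__(self):
        self.model = None
    def receive(self, model):       # model transfer over the 5GS (steps 2 and 5)
        self.model = model
    def inference_feedback(self):   # local training/inference result (step 3)
        return {"confidence": 0.55, "context": "new-context"}

class Provider:
    """Shared AI/ML model provider (application server)."""
    def __init__(self, model):
        self.model = model          # trained shared AI/ML model (step 1)
    def distribute(self, consumer):
        consumer.receive(self.model)
    def monitor(self, consumer):    # steps 4-5: watch context and performance
        fb = consumer.inference_feedback()
        if fb["confidence"] < 0.7:  # degradation detected
            self.model = retrain(self.model, fb["context"])
            consumer.receive(self.model)  # step 6: UE runs the updated model

ue = Consumer()
provider = Provider({"version": 1, "context": "original-context"})
provider.distribute(ue)
provider.monitor(ue)
print(ue.model["version"])  # prints 2: the updated model reached the UE
```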

6.6.4  Post-conditions

Following the example in the use case, the shared AI/ML model provider receives an accurate forecast regarding AI/ML model performance, and the AI/ML model consumer is using an AI/ML model with high performance.

6.6.5  Existing features partly or fully covering the use case functionality

Void

6.6.6  Potential New Requirements needed to support the use case

[P.R.6.6-001]
The 5GS shall be able to transfer an updated AI/ML model from the shared AI/ML model provider to the shared AI/ML model consumer within [1s-1min] latency for AI/ML models of a maximal size of [100-500] MB.
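A quick sanity check of what [P.R.6.6-001] implies for user-plane throughput, assuming 1 MB = 10^6 bytes and ignoring protocol overheads:

```python
def required_mbit_per_s(size_mb, latency_s):
    """Data rate needed to move size_mb megabytes within latency_s seconds."""
    return size_mb * 8 / latency_s

# Worst case: largest model over the tightest latency bound.
print(required_mbit_per_s(500, 1))             # prints 4000.0, i.e. 4 Gbit/s
# Most relaxed case: smallest model over the loosest bound.
print(round(required_mbit_per_s(100, 60), 1))  # prints 13.3 (Mbit/s)
```

The bracketed ranges therefore span roughly 13 Mbit/s to 4 Gbit/s of sustained downlink throughput, depending on which bounds are finally selected.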
