
TR 28.908
Study on Artificial Intelligence/Machine Learning (AI/ML) management

V19.0.0 (PDF)  2025/09  98 p.
V18.1.0  2024/09  98 p.
Rapporteur:
Dr. Al-kanani, Hassan
NEC Europe Ltd

Full Table of Contents for TR 28.908, Word version 19.0.0

1  Scope  p. 10
2  References  p. 10
3  Definitions of terms, symbols and abbreviations  p. 11
3.1  Terms  p. 11
3.2  Symbols  p. 11
3.3  Abbreviations  p. 11
4  Concepts and overview  p. 11
4.1  Concepts and terminologies  p. 11
4.2  Overview  p. 11
4.3  AI/ML workflow for 5GS  p. 12
4.3.1  AI/ML operational workflow  p. 12
4.3.2  AI/ML management capabilities  p. 13
5  Use cases, potential requirements and possible solutions  p. 14
5.1  Management Capabilities for ML training phase  p. 14
5.1.1  Event data for ML training  p. 14
5.1.1.1  Description  p. 14
5.1.1.2  Use cases  p. 14
5.1.1.2.1  Pre-processed event data for ML training  p. 14
5.1.1.3  Potential requirements  p. 15
5.1.1.4  Possible solutions  p. 15
5.1.1.5  Evaluation  p. 17
5.1.2  ML model validation  p. 17
5.1.2.1  Description  p. 17
5.1.2.2  Use cases  p. 17
5.1.2.2.1  ML model validation performance reporting  p. 17
5.1.2.3  Potential requirements  p. 17
5.1.2.4  Possible solutions  p. 17
5.1.2.4.1  Validation performance reporting by enhancing the existing IOC  p. 17
5.1.2.5  Evaluation  p. 18
5.1.3  ML model testing  p. 18
5.1.3.1  Description  p. 18
5.1.3.2  Use cases  p. 18
5.1.3.2.1  Consumer-requested ML model testing  p. 18
5.1.3.2.2  Control of ML model testing  p. 19
5.1.3.2.3  Multiple ML entities joint testing  p. 19
5.1.3.2.4  Model evaluation for ML testing  p. 19
5.1.3.3  Potential requirements  p. 19
5.1.3.4  Possible solutions  p. 20
5.1.3.4.1  NRM based solution  p. 20
5.1.3.5  Evaluation  p. 21
5.1.4  ML model re-training  p. 22
5.1.4.1  Description  p. 22
5.1.4.2  Use cases  p. 22
5.1.4.2.1  Producer-initiated threshold-based ML model re-training  p. 22
5.1.4.2.2  Efficient ML model re-training  p. 22
5.1.4.3  Potential requirements  p. 22
5.1.4.4  Possible solutions  p. 23
5.1.4.4.1  Producer Initiated Retraining  p. 23
5.1.4.4.2  Efficient ML model re-training  p. 23
5.1.4.5  Evaluation  p. 24
5.1.5  ML model joint training  p. 24
5.1.5.1  Description  p. 24
5.1.5.2  Use cases  p. 24
5.1.5.2.1  Support for ML model modularity - joint training of ML entities  p. 24
5.1.5.3  Potential requirements  p. 25
5.1.5.4  Possible solutions  p. 25
5.1.5.4.1  Support for ML model modularity - joint training of ML entities  p. 25
5.1.5.5  Evaluation  p. 25
5.1.6  Training data effectiveness reporting and analytics  p. 26
5.1.6.1  Description  p. 26
5.1.6.2  Use cases  p. 26
5.1.6.2.1  Training data effectiveness reporting  p. 26
5.1.6.2.2  Training data effectiveness analytics  p. 26
5.1.6.2.3  Measurement data correlation analytics for ML training  p. 26
5.1.6.3  Potential requirements  p. 27
5.1.6.4  Possible solutions  p. 27
5.1.6.4.1  Possible solution for training data effectiveness reporting  p. 27
5.1.6.4.2  Possible solution for training data effectiveness analytics  p. 28
5.1.6.4.3  Possible solution for measurement data correlation analytics  p. 28
5.1.6.5  Evaluation  p. 30
5.1.7  ML context  p. 30
5.1.7.1  Description  p. 30
5.1.7.2  Use cases  p. 30
5.1.7.2.1  ML context monitoring and reporting  p. 30
5.1.7.2.2  Mobility of ML Context  p. 31
5.1.7.2.3  Standby mode for ML model  p. 31
5.1.7.3  Potential requirements  p. 32
5.1.7.4  Possible solutions  p. 32
5.1.7.4.1  MLContext <<datatype>> on MLEntity  p. 32
5.1.7.4.2  Mobility of MLContext  p. 32
5.1.7.5  Evaluation  p. 33
5.1.8  ML model capability discovery and mapping  p. 33
5.1.8.1  Description  p. 33
5.1.8.2  Use cases  p. 34
5.1.8.2.1  Identifying capabilities of ML entities  p. 34
5.1.8.2.2  Mapping of the capabilities of ML entities  p. 34
5.1.8.3  Potential requirements  p. 35
5.1.8.4  Possible solutions  p. 35
5.1.8.5  Evaluation  p. 36
5.1.9  AI/ML update management  p. 36
5.1.9.1  Description  p. 36
5.1.9.2  Use cases  p. 36
5.1.9.2.1  ML entities updating initiated by producer  p. 36
5.1.9.3  Potential requirements  p. 37
5.1.9.4  Possible solutions  p. 37
5.1.9.5  Evaluation  p. 37
5.1.10  Performance evaluation for ML training  p. 37
5.1.10.1  Description  p. 37
5.1.10.2  Use cases  p. 37
5.1.10.2.1  Performance indicator selection for ML model training  p. 37
5.1.10.2.2  Monitoring and control of AI/ML behavior  p. 37
5.1.10.2.3  ML model performance indicators query and selection for ML training/testing  p. 38
5.1.10.2.4  ML model performance indicators selection based on MnS consumer policy for ML training/testing  p. 38
5.1.10.3  Potential requirements  p. 39
5.1.10.4  Possible solutions  p. 39
5.1.10.4.1  Possible solutions for performance indicator selection for ML model training  p. 39
5.1.10.4.2  Possible solutions for monitoring and control of AI/ML behavior  p. 40
5.1.10.4.3  Possible solutions for ML model performance indicators query and selection  p. 40
5.1.10.4.4  Possible solutions for policy-based performance indicator selection  p. 41
5.1.10.5  Evaluation  p. 41
5.1.11  Configuration management for ML training phase  p. 42
5.1.11.1  Description  p. 42
5.1.11.2  Use cases  p. 42
5.1.11.2.1  Control of producer-initiated ML training  p. 42
5.1.11.3  Potential requirements  p. 42
5.1.11.4  Possible solutions  p. 42
5.1.11.4.1  ML training policy configuration  p. 42
5.1.11.4.2  ML training activation and deactivation  p. 43
5.1.11.4.2.1  General framework for activation and deactivation  p. 43
5.1.11.4.2.2  Instant activation and deactivation  p. 43
5.1.11.4.2.3  Schedule based activation and deactivation  p. 43
5.1.11.5  Evaluation  p. 43
5.1.12  ML Knowledge Transfer Learning  p. 44
5.1.12.1  Description  p. 44
5.1.12.2  Use cases  p. 44
5.1.12.2.1  Discovering sharable Knowledge  p. 44
5.1.12.2.2  Knowledge sharing and transfer learning  p. 45
5.1.12.3  Potential requirements  p. 46
5.1.12.4  Possible solutions  p. 47
5.1.12.5  Evaluation  p. 48
5.2  Management Capabilities for AI/ML inference phase  p. 48
5.2.1  AI/ML Inference History  p. 48
5.2.1.1  Description  p. 48
5.2.1.2  Use cases  p. 48
5.2.1.2.1  Tracking AI/ML inference decision and context  p. 48
5.2.1.3  Potential requirements  p. 49
5.2.1.4  Possible solutions  p. 49
5.2.1.5  Evaluation  p. 50
5.2.2  Orchestrating AI/ML Inference  p. 50
5.2.2.1  Description  p. 50
5.2.2.2  Use cases  p. 50
5.2.2.2.1  Knowledge sharing on executed actions  p. 50
5.2.2.2.2  Knowledge sharing on impacts of executed actions  p. 50
5.2.2.2.3  Abstract information on impacts of executed actions  p. 51
5.2.2.2.4  Triggering execution of AI/ML inference functions or ML entities  p. 52
5.2.2.2.5  Orchestrating decisions of AI/ML inference functions or ML entities  p. 52
5.2.2.3  Potential requirements  p. 52
5.2.2.4  Possible solutions  p. 53
5.2.2.5  Evaluation  p. 58
5.2.3  Coordination between the ML capabilities  p. 59
5.2.3.1  Description  p. 59
5.2.3.2  Use cases  p. 59
5.2.3.2.1  Alignment of the ML capability between 5GC/RAN and 3GPP management system  p. 59
5.2.3.3  Potential requirements  p. 59
5.2.3.4  Possible solutions  p. 60
5.2.3.4.1  Possible solution #1  p. 60
5.2.3.5  Evaluation  p. 60
5.2.4  ML model loading  p. 60
5.2.4.1  Description  p. 60
5.2.4.2  Use cases  p. 61
5.2.4.2.1  ML model loading control and monitoring  p. 61
5.2.4.3  Potential requirements  p. 61
5.2.4.4  Possible solutions  p. 62
5.2.4.4.1  NRM based solution  p. 62
5.2.4.5  Evaluation  p. 63
5.2.5  ML inference emulation  p. 63
5.2.5.1  Description  p. 63
5.2.5.2  Use cases  p. 64
5.2.5.2.1  AI/ML inference emulation  p. 64
5.2.5.2.2  Managing ML inference emulation  p. 64
5.2.5.3  Potential requirements  p. 64
5.2.5.4  Possible solutions  p. 65
5.2.5.5  Evaluation  p. 66
5.2.6  Performance evaluation for AI/ML inference  p. 67
5.2.6.1  Description  p. 67
5.2.6.2  Use cases  p. 67
5.2.6.2.1  AI/ML performance evaluation in inference phase  p. 67
5.2.6.2.2  ML model performance indicators query and selection for AI/ML inference  p. 67
5.2.6.2.3  ML model performance indicators selection based on MnS consumer policy for AI/ML inference  p. 68
5.2.6.2.4  AI/ML abstract performance  p. 68
5.2.6.3  Potential requirements  p. 68
5.2.6.4  Possible solutions  p. 69
5.2.6.4.1  Possible solutions for AI/ML performance evaluation in inference phase  p. 69
5.2.6.4.2  Possible solutions for ML model performance indicators query and selection for AI/ML inference  p. 70
5.2.6.4.3  Possible solutions for policy-based performance indicator selection based on MnS consumer policy for AI/ML inference  p. 70
5.2.6.4.4  Possible solutions for AI/ML performance abstraction  p. 70
5.2.6.5  Evaluation  p. 71
5.2.7  Configuration management for AI/ML inference phase  p. 72
5.2.7.1  Description  p. 72
5.2.7.2  Use cases  p. 72
5.2.7.2.1  ML model configuration for RAN domain ES initiated by consumer  p. 72
5.2.7.2.2  ML model configuration for RAN domain ES initiated by producer  p. 73
5.2.7.2.3  Partial activation of AI/ML inference capabilities  p. 73
5.2.7.2.4  Configuration for AI/ML inference initiated by MnS consumer  p. 74
5.2.7.2.5  Configuration for AI/ML inference selected by producer  p. 74
5.2.7.2.6  Enabling policy-based activation of AI/ML capabilities  p. 74
5.2.7.3  Potential requirements  p. 74
5.2.7.4  Possible solutions  p. 75
5.2.7.4.1  AI/ML inference function configuration  p. 75
5.2.7.4.1.1  Configuration for AI/ML inference initiated by MnS consumer  p. 75
5.2.7.4.1.2  Configuration for AI/ML inference selected by producer - Context-specific configuration  p. 75
5.2.7.4.2  AI/ML activation  p. 76
5.2.7.4.2.1  General framework for activation and deactivation  p. 76
5.2.7.4.2.2  Instant activation and deactivation  p. 76
5.2.7.4.2.3  Policy based activation and deactivation  p. 76
5.2.7.4.2.4  Schedule based activation and deactivation  p. 76
5.2.7.4.2.5  Gradual activation and deactivation  p. 77
5.2.7.5  Evaluation  p. 78
5.2.8  AI/ML update control  p. 78
5.2.8.1  Description  p. 78
5.2.8.2  Use cases  p. 78
5.2.8.2.1  Availability of new capabilities or ML entities  p. 78
5.2.8.2.2  Triggering ML model update  p. 78
5.2.8.3  Potential requirements  p. 79
5.2.8.4  Possible solutions  p. 79
5.2.8.5  Evaluation  p. 80
5.3  Common management capabilities for ML training and AI/ML inference phase  p. 80
5.3.1  Trustworthy Machine Learning  p. 80
5.3.1.1  Description  p. 80
5.3.1.2  Use cases  p. 81
5.3.1.2.1  AI/ML trustworthiness indicators  p. 81
5.3.1.2.2  AI/ML data trustworthiness  p. 81
5.3.1.2.3  ML training trustworthiness  p. 82
5.3.1.2.4  AI/ML inference trustworthiness  p. 82
5.3.1.2.5  Assessment of AI/ML trustworthiness  p. 82
5.3.1.3  Potential requirements  p. 83
5.3.1.4  Possible solutions  p. 84
5.3.1.4.1  ML trustworthiness indicators  p. 84
5.3.1.4.2  AI/ML data trustworthiness  p. 85
5.3.1.4.3  ML training trustworthiness  p. 86
5.3.1.4.4  AI/ML inference trustworthiness  p. 86
5.3.1.4.5  Assessment of AI/ML trustworthiness  p. 87
5.3.1.5  Evaluation  p. 87
6  Deployment scenarios  p. 88
7  Conclusions and recommendations  p. 91
A  UML source codes  p. 92
Change history  p. 94
