TR 28.858
Study on Artificial Intelligence / Machine Learning (AI/ML) management Phase 2
Word version: V19.0.0 (2024/12, 47 pp.)
Rapporteur: Dr. Hassan Al-kanani (NEC Europe Ltd)

Table of Contents for TR 28.858
1  Scope  p. 9
2  References  p. 9
3  Definitions of terms, symbols and abbreviations  p. 9
3.1  Terms  p. 9
3.2  Symbols  p. 10
3.3  Abbreviations  p. 10
4  Concepts and overview  p. 10
4.1  Overview  p. 10
4.2  Energy consumption of AI/ML  p. 11
4.3  Management of AI/ML capabilities for RAN and 5GC  p. 12
4.3.1  Management of ML model training and AI/ML inference function for RAN  p. 12
4.3.1.1  Managing NG-RAN AI/ML based Coverage and Capacity Optimization  p. 12
4.3.1.2  Managing NG-RAN AI/ML-based Network Slicing  p. 12
4.3.2  Management of ML model training and AI/ML inference function for 5GC  p. 12
4.4  AI/ML trustworthiness  p. 12
5  Management capabilities for AI/ML lifecycle  p. 13
5.1  ML model training  p. 13
5.1.1  ML-Knowledge-based Transfer Learning  p. 13
5.1.1.1  Description  p. 13
5.1.1.2  Use cases  p. 14
5.1.1.2.1  Discovering sharable Knowledge  p. 14
5.1.1.2.2  Knowledge sharing and transfer learning  p. 14
5.1.1.3  Potential requirements  p. 15
5.1.1.4  Possible solutions  p. 15
5.1.1.5  Evaluation  p. 16
5.1.2  ML pre-training  p. 16
5.1.2.1  Description  p. 16
5.1.2.2  Use cases  p. 17
5.1.2.2.1  Consumer requested ML pre-training  p. 17
5.1.2.3  Potential requirements  p. 17
5.1.2.4  Possible solutions  p. 17
5.1.2.4.1  Possible solution #1  p. 17
5.1.2.4.2  Possible solution #2  p. 17
5.1.2.5  Evaluation  p. 18
5.1.3  ML Fine-tuning  p. 18
5.1.3.1  Description  p. 18
5.1.3.2  Use cases  p. 18
5.1.3.2.1  ML fine-tuning for a pre-trained ML model  p. 18
5.1.3.3  Potential requirements  p. 18
5.1.3.4  Possible solution  p. 18
5.1.3.4.1  Possible solution #1  p. 18
5.1.3.4.2  Possible solutions #2  p. 19
5.1.3.5  Evaluation  p. 19
5.1.4  ML model training for multiple contexts  p. 19
5.1.4.1  Description  p. 19
5.1.4.2  Use cases  p. 19
5.1.4.2.1  ML model training for multiple contexts  p. 19
5.1.4.3  Potential Requirements  p. 20
5.1.4.4  Possible solutions  p. 20
5.1.4.5  Evaluation  p. 20
5.1.5  ML training data statistics  p. 20
5.1.5.1  Description  p. 20
5.1.5.2  Use Cases  p. 20
5.1.5.2.1  Training data statistical properties for ML training  p. 20
5.1.5.3  Potential requirements  p. 21
5.1.5.4  Possible solutions  p. 21
5.1.5.5  Evaluation  p. 21
5.1.6  ML model confidence  p. 21
5.1.6.1  Description  p. 21
5.1.6.2  Use Cases  p. 21
5.1.6.2.1  Model Confidence Threshold in ML Training  p. 21
5.1.6.3  Potential requirements  p. 22
5.1.6.4  Possible solutions  p. 22
5.1.6.5  Evaluation  p. 22
5.1.7  Management of Reinforcement Learning  p. 22
5.1.7.1  Description  p. 22
5.1.7.2  Use cases  p. 23
5.1.7.2.1  Exploration in Reinforcement Learning  p. 23
5.1.7.2.2  Training Conflict in Reinforcement Learning  p. 23
5.1.7.3  Potential Requirements  p. 23
5.1.7.4  Possible solutions  p. 24
5.1.7.4.1  Possible solution #1: Exploration in Reinforcement Learning  p. 24
5.1.7.4.2  Possible solution #2: Training Conflict in Reinforcement Learning  p. 24
5.1.7.5  Evaluation  p. 24
5.1.7.5.1  Exploration in Reinforcement Learning  p. 24
5.1.7.5.2  Training Conflict in Reinforcement Learning  p. 25
5.1.8  Sustainable AI/ML for ML training  p. 25
5.1.8.1  Description  p. 25
5.1.8.2  Use cases  p. 25
5.1.8.2.1  AI/ML energy consumption evaluation and reporting for ML model training  p. 25
5.1.8.3  Potential Requirements  p. 25
5.1.8.4  Possible solutions  p. 26
5.1.8.4.1  Possible solution #1  p. 26
5.1.8.4.2  Possible solution #2  p. 26
5.1.8.5  Evaluation  p. 26
5.1.9  ML model distributed training  p. 26
5.1.9.1  Description  p. 26
5.1.9.2  Use cases  p. 26
5.1.9.2.1  ML model distributed training  p. 26
5.1.9.3  Potential requirements  p. 27
5.1.9.4  Possible solutions  p. 27
5.1.9.4.1  ML model distributed training  p. 27
5.1.9.5  Evaluation  p. 27
5.1.10  Management of Federated Learning  p. 27
5.1.10.1  Description  p. 27
5.1.10.2  Use cases  p. 28
5.1.10.2.1  Management of different roles in Federated Learning  p. 28
5.1.10.3  Potential requirements  p. 28
5.1.10.4  Possible solutions  p. 29
5.1.10.5  Evaluation  p. 29
5.1.11  ML Authentication  p. 30
5.1.11.1  Description  p. 30
5.1.11.2  Potential Requirements  p. 30
5.1.11.3  Possible Solution  p. 30
5.1.11.4  Evaluation  p. 30
5.1.12  AI/ML prediction latency  p. 30
5.1.12.1  Description  p. 30
5.1.12.2  Use cases  p. 30
5.1.12.2.1  AI/ML prediction latency during ML model training  p. 30
5.1.12.3  Potential requirements  p. 31
5.1.12.4  Possible solutions  p. 31
5.1.12.5  Evaluation  p. 31
5.2  AI/ML inference emulation  p. 31
5.2.1  ML inference emulation  p. 31
5.2.1.1  Description  p. 31
5.2.1.2  Use cases  p. 31
5.2.1.2.1  AI/ML inference emulation  p. 31
5.2.1.2.2  Managing ML inference emulation  p. 32
5.2.1.3  Potential requirements  p. 32
5.2.1.4  Possible solutions  p. 33
5.2.1.5  Evaluation  p. 34
5.2.2  ML inference emulation environment selection  p. 34
5.2.2.1  Description  p. 34
5.2.2.2  Use cases  p. 34
5.2.2.2.1  ML inference emulation environment selection  p. 34
5.2.2.3  Potential requirements  p. 34
5.2.2.4  Possible solutions  p. 35
5.2.2.5  Evaluation  p. 35
5.3  AI/ML deployment  p. 35
5.3.1  Enhance the ML model loading use case  p. 35
5.3.1.1  Description  p. 35
5.3.1.2  Use cases  p. 35
5.3.1.3  Potential requirements  p. 35
5.3.1.4  Possible solutions  p. 35
5.3.2  Managing ML Model Transfer/delivery  p. 35
5.3.2.1  Description  p. 35
5.3.2.2  Use cases  p. 36
5.3.2.2.1  Relation of ML model delivery in RAN to ML model loading in SA5  p. 36
5.3.2.3  Potential Requirements  p. 36
5.3.2.4  Possible solutions  p. 36
5.3.2.5  Evaluation  p. 36
5.4  AI/ML inference  p. 36
5.4.1  Coordination between the ML capabilities  p. 36
5.4.1.1  Description  p. 36
5.4.1.2  Use cases  p. 37
5.4.1.2.1  Alignment of the ML capability between 5GC/RAN and 3GPP management system  p. 37
5.4.1.3  Potential requirements  p. 37
5.4.1.4  Possible solutions  p. 37
5.4.1.4.1  Possible solution #1  p. 37
5.4.1.5  Evaluation  p. 38
5.4.2  Sustainable AI/ML for AI/ML inference  p. 38
5.4.2.1  Description  p. 38
5.4.2.2  Use cases  p. 38
5.4.2.2.1  AI/ML energy consumption evaluation and reporting for AI/ML inference  p. 38
5.4.2.3  Potential requirements  p. 39
5.4.2.4  Possible solutions  p. 39
5.4.2.4.1  Possible solution #1  p. 39
5.4.2.4.2  Possible solution #2  p. 39
5.4.2.5  Evaluation  p. 39
5.4.3  ML remedial action management  p. 39
5.4.3.1  Description  p. 39
5.4.3.2  Use cases  p. 39
5.4.3.2.1  ML remedial actions due to performance degradation and energy consumption  p. 39
5.4.3.3  Potential requirements  p. 40
5.4.3.4  Possible solutions  p. 40
5.4.3.5  Evaluation  p. 40
5.4.4  Managing ML models in use in a live network  p. 40
5.4.4.1  Description  p. 40
5.4.4.2  Use Cases  p. 40
5.4.4.2.1  Handling of underperforming ML trained models in live networks  p. 40
5.4.4.2.2  Performance monitoring of Network Functions with ML trained models in live networks  p. 40
5.4.4.3  Potential requirements  p. 41
5.4.4.4  Possible solutions  p. 41
5.4.4.5  Evaluation  p. 42
5.4.5  AI/ML prediction latency  p. 42
5.4.5.1  Description  p. 42
5.4.5.2  Use cases  p. 42
5.4.5.2.1  AI/ML prediction latency during inference  p. 42
5.4.5.3  Potential requirements  p. 42
5.4.5.4  Possible solutions  p. 42
5.4.5.5  Evaluation  p. 42
5.4.6  ML explainability  p. 42
5.4.6.1  Description  p. 42
5.4.6.2  Use cases  p. 43
5.4.6.2.1  Local explanation in AI/ML inference  p. 43
5.4.6.3  Potential requirements  p. 43
5.4.6.4  Possible solutions  p. 43
5.4.6.5  Evaluation  p. 43
6  Conclusions and recommendations  p. 43
Change history  p. 45