
Content for  TS 23.482  Word version:  19.3.0

8.23  AIMLE assisted ML model selection

8.23.1  General

ML model selection is an important consideration for successful ML training and deployment. Many ML models exist for a given ML application, and some models can generate better results than others for a particular dataset. This procedure allows AIMLE service consumers to request assistance from an AIMLE server in selecting appropriate ML models for a given dataset and the provided requirements. The AIMLE server coordinates the selection of candidate ML models and the training of those models with the given dataset. A list of ML models with their corresponding performance is returned to the AIMLE service consumer.
The following clauses specify procedures, information flows, and APIs for ML model selection.

8.23.2  Procedure

Pre-conditions:
  1. The AIMLE service consumer has identified datasets and ML requirements.
Figure 8.23.2-1: AIMLE assisted ML model selection
Step 1.
An AIMLE service consumer (e.g. VAL server) sends a subscription request to an AIMLE server. The request includes information as described in Table 8.23.3.1-1. The AIMLE service consumer provides a list of candidate ML models, dataset identifiers, and requirements for training the candidate ML models. The AIMLE service consumer can either provide AIMLE client set identifiers or AIMLE client selection criteria for selecting the AIMLE clients to train the candidate ML models.
Step 2.
The AIMLE server authenticates the requestor and checks authorization for the request. If authorized, the AIMLE server assigns an identifier for the subscription.
Step 3.
The AIMLE server sends an ML model selection subscription response that includes the information in Table 8.23.3.2-1.
Step 4.
The AIMLE server determines whether additional ML models can be candidates for the type of ML application based on the ML model requirements provided in step 1, and selects additional candidate ML models to train with the given dataset. The AIMLE server can discover models from the ML repository to determine the list of candidate ML models as described in clause 8.11.3.
Step 5.
The AIMLE server performs ML model training for each candidate model. ML model training can be for split AI/ML operation as described in clause 8.14, Transfer Learning as described in clause 8.16, or Federated Learning as described in clauses 8.12 and 8.18. If AIMLE client selection criteria were provided in step 1, the AIMLE server performs AIMLE client selection as described in clause 8.9 or clause 8.13 during the training of the candidate ML models.
Step 6.
The AIMLE server performs ML model information storage as described in clause 8.11 for each trained ML model.
Step 7.
The AIMLE server aggregates and determines the performance of each ML model with the given dataset.
Step 8.
The AIMLE server sends a notification to the AIMLE service consumer that includes information as described in Table 8.23.3.3-1. The notification includes a list of trained candidate ML models with corresponding model information and performance. The AIMLE service consumer can then select the best performing ML models from the list provided in the notification.
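The core of steps 4 to 8 can be sketched as follows. This is an illustrative sketch only: the `CandidateModel` structure, the `rank_candidates` helper, and the `evaluate` callable are hypothetical stand-ins for the training procedures of clauses 8.12, 8.14 and 8.16, not part of the specification.

```python
from dataclasses import dataclass

@dataclass
class CandidateModel:
    model_id: str
    performance: float = 0.0  # performance metric value after training (step 7)

def rank_candidates(candidates, evaluate):
    """Train/evaluate each candidate (steps 5 and 7), then order best-first
    so the consumer can pick the top models from the notification (step 8)."""
    for model in candidates:
        model.performance = evaluate(model.model_id)
    return sorted(candidates, key=lambda m: m.performance, reverse=True)

# Hypothetical usage: fixed scores stand in for actual model training.
scores = {"model-a": 0.91, "model-b": 0.87}
ranked = rank_candidates([CandidateModel("model-a"), CandidateModel("model-b")],
                         evaluate=lambda mid: scores[mid])
```

The consumer would then read the ordered list from the notification and select the best performing model(s).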

8.23.3  Information flows

8.23.3.1  AIMLE assisted ML model selection subscription request

Table 8.23.3.1-1 shows the request sent by an AIMLE service consumer to an AIMLE server for the AIMLE assisted ML model selection subscription procedure.
Information element | Status | Description
Requestor identifier | M | The identifier of the requestor.
AIML profile | M | Requirements for the ML model selection operation.
> Candidate ML models | M | A list of ML model identifiers (and initial model parameters) to train. The list provides candidate ML models to evaluate against the provided dataset.
> ML model requirements | O | ML model requirements for the AIMLE server to use for selecting additional candidate ML models for training with the provided datasets. The requirements can be any of the ML model information described in Table 8.11.4.1-2.
> AIMLE client set identifiers | O (NOTE 1) | A list of AIMLE client set identifiers to train the ML model.
> AIMLE client selection criteria | O (NOTE 1) | Selection criteria for finding suitable AIMLE clients for training the ML model.
> Number of required AIMLE clients | O (NOTE 2) | A minimum number of AIMLE clients required for training the ML model.
> Dataset identifiers | M | Dataset identifiers to use for training and evaluating model performance to obtain a list of ML model rankings.
> Training requirements | M | Training requirements as detailed in Table 8.23.3.1-2.
Notification target | O | Endpoint information for receiving notifications.
Notification settings | O | Notification settings indicating when the AIMLE server provides ML model status, e.g. upon completion, after a certain percentage of job completion, periodically based on date and time, upon error events.
NOTE 1: At least one of these information elements shall be provided.
NOTE 2: Mandatory if AIMLE client selection criteria are present.
Table 8.23.3.1-2: Training requirements
Information element | Status | Description
Performance metric | M | Identifies the performance metric used to evaluate ML model training, e.g. mean absolute error, mean squared error, accuracy, precision and recall. The performance metric indicates the performance of the ML model.
Performance target | O | A target performance that indicates acceptable performance has been reached and training can be stopped.
Number of training rounds | M | A minimum number of training rounds for the ML training.
Number of data samples | M | A minimum number of data samples for the ML training.
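The conditional-presence rules above (NOTE 1: at least one of the client set identifiers and selection criteria shall be provided; NOTE 2: the number of required AIMLE clients is mandatory when selection criteria are present) can be checked as sketched below. The key names are illustrative, not normative.

```python
def validate_selection_request(request: dict) -> list:
    """Check the conditional-presence notes of Table 8.23.3.1-1.
    Key names are hypothetical, not taken from the specification."""
    errors = []
    has_sets = bool(request.get("aimle_client_set_ids"))
    has_criteria = bool(request.get("aimle_client_selection_criteria"))
    # NOTE 1: at least one of the two IEs shall be provided.
    if not (has_sets or has_criteria):
        errors.append("provide client set identifiers or selection criteria")
    # NOTE 2: number of required clients is mandatory with selection criteria.
    if has_criteria and "num_required_clients" not in request:
        errors.append("number of required AIMLE clients is missing")
    return errors
```

A request carrying only selection criteria would therefore be rejected until the number of required AIMLE clients is also supplied.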

8.23.3.2  ML model selection subscription response

Table 8.23.3.2-1 shows the response sent by the AIMLE server to the AIMLE service consumer for the AIMLE assisted ML model selection subscription procedure.
Information element | Status | Description
Status | M | The status of the ML model selection operation.
Subscription identifier | M | An identifier for the subscription.

8.23.3.3  ML model selection notification

Table 8.23.3.3-1 shows the notification sent by the AIMLE server to the AIMLE service consumer for the AIMLE assisted ML model selection subscription procedure.
Information element | Status | Description
Subscription identifier | M | The identifier for the subscription that the notification is associated with.
Operational status | M | The status of the ML model selection operation. The status can represent the estimated percentage of completion or be associated with the notification settings.
Trained ML models | M | The results of the ML model training.
> ML model information | M | Information about the ML model, such as the ML model type, as described in Table 8.11.4.1-2.
> Model performance | M | The performance metric for training the ML model.
Elapsed time | O | The time that has elapsed for the ML model selection operation.
Timestamp | O | Timestamp of the notification.

8.24  AIMLE context transfer

8.24.1  General

This clause describes the AIMLE context transfer procedure between AIMLE servers (over the AIML-E reference point).

8.24.2  Procedure

Pre-conditions:
  1. Each edge AIMLE server manages the AIMLE clients within its service area to perform AI/ML operations.
  2. A UE associated with an AIMLE client moves from a source service area (managed by a source edge AIMLE server) to a target service area (managed by a target edge AIMLE server). The transition triggers the application context relocation (ACR) procedure between the two edge AIMLE servers (acting as source EAS and target EAS, respectively), as specified in TS 23.558.
Figure 8.24.2-1: AIMLE context transfer
Step 1.
The source edge AIMLE server sends an AIMLE context transfer request to the target edge AIMLE server as described in Table 8.24.3.1-1. The request includes AIMLE context information, which is generated based on the responses/notifications received from the transitioned AIMLE client (e.g. step 7 of clause 8.12.2 or step 3 of clause 8.20.1).
The AIMLE context information is used by the target edge AIMLE server to determine whether information (e.g. AI/ML operation output/result) received from the AIMLE client should be transferred to the source edge AIMLE server (or to another edge AIMLE server that has been associated with the AIMLE client). For example, the target edge AIMLE server can forward the AIMLE service results received from the AIMLE client to the source edge AIMLE server if the results are only applicable to the source service area or if the AIMLE client is part of a split operation pipeline formed in the source service area.
Step 2.
The target edge AIMLE server sends an AIMLE context transfer response to the source edge AIMLE server as described in Table 8.24.3.2-1.
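The forwarding decision described in step 1 can be sketched as follows. The function and the field names of the applicability information are hypothetical illustrations of the rule, not a normative data model.

```python
def should_forward_to_source(context: dict, source_area: str) -> bool:
    """Return True when results received from the AIMLE client are relevant
    only to the source edge AIMLE server (step 1 decision)."""
    applicability = context.get("service_applicability", {})
    # Results that are only applicable within the source service area.
    if applicability.get("applicable_area") == source_area:
        return True
    # Client participates in a split operation pipeline formed at the source.
    if applicability.get("split_pipeline_area") == source_area:
        return True
    return False
```

With this rule, results tied to the source area or to a source-side split pipeline are forwarded, while results applicable in the target area are handled locally.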

8.24.3  Information flows

8.24.3.1  AIMLE context transfer request

Table 8.24.3.1-1 shows the AIMLE context transfer request that is sent by a source AIMLE server to a target AIMLE server.
Information element | Status | Description
Requestor identifier | M | The identifier of the requestor (e.g. AIMLE server).
AIMLE context information | M | The AIMLE context information as described in Table 8.24.3.1-2.
Table 8.24.3.1-2: AIMLE context information
Information element | Status | Description
AIMLE client ID | M | The identifier of the AIMLE client associated with the context.
Current managing AIMLE server | O | The identifier of the AIMLE server that is currently managing the AIMLE client, i.e. the AIMLE server associated with the service area that the AIMLE client is currently in.
Previous managing AIMLE server | O | List of identifiers of AIMLE servers that have been associated with the AIMLE client. The list is populated by adding the identifier of the source edge AIMLE server whenever the UE transitions from a source edge area to a target edge area.
AIMLE service status | O | Status of the AI/ML operations (task) at the AIMLE client, e.g. "active", "paused", "completed", percentage of completion.
AIMLE service results | O | Results of the AI/ML operations (task) performed by the AIMLE client.
AIMLE service applicability | O | Applicability information of the AI/ML operations performed by the AIMLE client, e.g. the operation results are applicable within a certain edge service area, or the operations are applicable within a certain split operation pipeline.
> ML context information | O | Context information related to the ML operation that the AIMLE client is participating in or performing.
>> VAL service information | O | Information related to the VAL service for which the AIMLE task is performed (e.g. the VAL service identifier for the AIMLE HFL training operation).
>> ML task | O | Type of ML task (model training, model testing, model inference, model transfer, model offload, model split, intermediate AI/ML operation/task) to be continued at the target AIMLE server.
>> ML task information | O | Information related to the ML task indicated in the "ML task" information element. The model training task information may include the training objective to be achieved, HFL training information, VFL training information, dataset information for training, the status of the training operation at the AIMLE client (e.g. "active", "paused", "completed"), training results, etc. The model inference task information may include inference results, inference job ID, etc. The model split task information may include the split operation profile as specified in Table 8.14.3.3-2.
>> ML model information | M | Model information related to the ML task. This information may include the model identifier, information to fetch the ML model, the address (e.g. a URL or an FQDN) of the ML model file or of the model repository where the ML model resides, model parameters from ML training, etc.

8.24.3.2  AIMLE context transfer response

Table 8.24.3.2-1 shows the AIMLE context transfer response that is sent by the target edge AIMLE server to the source edge AIMLE server.
Information element | Status | Description
Successful response | O (NOTE) | Indicates that the request was successful.
Failure response | O (NOTE) | Indicates that the request failed.
> Cause | O | Indicates the cause of the request failure.
NOTE: One of the IEs shall be present.

8.25  Support of AIML Services for Assisting Hierarchical Computing

This clause describes the procedure for supporting AIML services for assisting hierarchical computing.

8.25.1  General

This clause describes the procedure for assisting hierarchical computing by the AIMLE server. An entity (e.g. a CAS or EAS as defined in TS 23.558) can have different roles in hierarchical computing: one root node (e.g. CAS, EAS); one or more children of the root, also known as sub-root nodes (e.g. EAS); and multiple leaf nodes (e.g. EAS) with no children. Here, hierarchical computing represents a computation architecture in which multiple computation entities are involved and a computation task is carried out over multiple levels of computation.

8.25.2  Procedure for assisting hierarchical computing process

Figure 8.25.2-1 illustrates the procedure for the AIMLE server to assist a hierarchical computing process.
Pre-conditions:
  1. An AI/ML task is treated as a special computing task to be completed at the consumer (e.g. CAS, EAS).
  2. The consumer decides its role in the hierarchical computing architecture for an AI/ML task (e.g. FL training) based on its local configuration.
  3. The AIMLE server can assist a hierarchical computing process by, for example, recommending time window(s) for computing task distribution (if the consumer is a root node), recommending time window(s) for intermediate output delivery (if the consumer is a leaf node), provisioning a candidate execution node list, or provisioning computing preparation status.
  4. The consumer decides that assistance from the AIMLE server is needed to support the hierarchical computing process (AI/ML task), e.g. due to a lack of capability for execution node selection.
  5. The AIMLE server is deployed following the hierarchical deployment model described in clause A.4.
Figure 8.25.2-1: Procedure for assisting a hierarchical computing process
The procedure in detail is as follows:
Step 1.
The VAL server (e.g. CAS, EAS) sends a hierarchical computing assistance request to the edge AIMLE server for a hierarchical computing process (AI/ML task). The request message includes the information described in Table 8.25.3.1-1.
Step 2.
The edge AIMLE server authenticates and authorizes the request from the consumer and checks its capability to provide the requested assistance information.
If the request is authorized and the edge AIMLE server can generate the required assistance information (e.g. the computing preparation status at an execution node that is registered to it), it performs steps 3 and 4.
If the request is authorized but the edge AIMLE server cannot generate the required assistance information (e.g. the execution nodes are registered to different edge AIMLE servers), it performs step 5 and skips steps 3 and 4.
Step 3.
The edge AIMLE server performs operations to generate the assistance information requested in step 1. For example, for the computing preparation status at an execution node, the edge AIMLE server may subscribe to or request analytics from the ADAE server (e.g. edge load analytics, edge computing preparation analytics) and aggregate the received information to generate the assistance information.
Step 4.
The edge AIMLE server sends a hierarchical computing assistance response to the consumer (e.g. with the computing preparation status at the execution node). The response message contains the information described in Table 8.25.3.2-1.
Step 5.
If the edge AIMLE server cannot generate the required assistance information, it sends a hierarchical computing assistance request to the central AIMLE server. The request message includes the information described in Table 8.25.3.1-1.
Step 6.
The central AIMLE server authenticates and authorizes the request from the edge AIMLE server.
Step 7.
If the request is authorized, the central AIMLE server performs operations to generate the assistance information requested in step 1.
For example, for execution node selection, according to the role of the VAL server indicated in the request message in step 1, the central AIMLE server derives the requirements on the execution node (e.g. high computation capability, high communication capability, or both). The central AIMLE server may then subscribe to or request analytics from the ADAE server (e.g. edge load analytics, edge computing preparation analytics), trigger the generation of split operation assistance information, or retrieve FL member information (the FL member with an EAS ID) from the ML repository. Existing services can be reused for execution node selection, e.g. step 3 in clause 8.12.2.
Step 8.
The central AIMLE server sends a hierarchical computing assistance response with the generated assistance information (e.g. a list of selected candidate execution nodes). The response message contains the information described in Table 8.25.3.2-1.
Step 8a.
The central AIMLE server sends the hierarchical computing assistance response to the edge AIMLE server.
Step 8b.
The edge AIMLE server sends the hierarchical computing assistance response to the consumer.
The VAL server (e.g. CAS, EAS) uses the assistance information for decision-making on its computing operations.
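The branch between answering locally (steps 3 and 4) and delegating to the central AIMLE server (steps 5 to 8) can be sketched as follows. The function, the request keys, and the `ask_central` callable are hypothetical illustrations of the flow, not a normative API.

```python
def handle_assistance_request(request: dict, local_status: dict, ask_central):
    """Answer locally when the execution node is registered with this edge
    server (steps 3-4); otherwise delegate to the central server (steps 5-8)."""
    node = request["execution_node"]
    if node in local_status:
        return {"success": True, "preparation_status": local_status[node]}
    return ask_central(request)

# Hypothetical usage: "eas-2" is unknown locally, so the request is delegated.
central = lambda req: {"success": True, "preparation_status": "ready"}
local = handle_assistance_request({"execution_node": "eas-1"},
                                  {"eas-1": "ready"}, central)
delegated = handle_assistance_request({"execution_node": "eas-2"},
                                      {"eas-1": "ready"}, central)
```

In the delegated case the response reaches the consumer via the edge AIMLE server, as in steps 8a and 8b.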

8.25.3  Information flows

8.25.3.1  Hierarchical computing assistance request

Table 8.25.3.1-1 shows the request sent by the VAL server (e.g. CAS, EAS) to the AIMLE server for assistance with a hierarchical computing process (AI/ML task).
Information element | Status | Description
Requestor identifier | M | The identifier of the requestor.
Original requestor identifier | O | The identifier of the original requestor, e.g. VAL server ID, EAS ID, CAS ID.
Role | M | Represents the role of the VAL server in a hierarchical computing architecture (e.g. root node, sub-root node, or leaf node of a hierarchical computing process).
Computing task type | M | The type of computing task (e.g. VFL, HFL).
Assistance information type | M | Represents the assistance information type, which indicates the assistance information needed, e.g. candidate execution node list, computing preparation status at an execution node.
Execution node(s) | O | Represents one execution node or a list of candidate execution nodes.

8.25.3.2  Assist hierarchical computing response

Table 8.25.3.2-1 shows the response sent by the AIMLE server to the VAL server (e.g. CAS, EAS) for assisting the hierarchical computing process.
Information element | Status | Description
Success response | O (NOTE 1) | Indicates that the assist hierarchical computing request was successful.
> Assistance information | M | The assistance information for assisting the hierarchical computing process, e.g. candidate execution node list, computing preparation status at an execution node.
>> List of candidate execution nodes | O (NOTE 2) | A list of selected candidate execution nodes.
>> Preparation status | O (NOTE 2) | Computing preparation status at the execution node provided in the request.
Failure response | O (NOTE 1) | Indicates that the assist hierarchical computing request failed.
> Cause | M | Reason for the failure.
NOTE 1: One of the IEs shall be present.
NOTE 2: At least one of the IEs shall be present.
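NOTE 1 makes the success and failure IEs mutually exclusive, which a consumer can enforce when parsing the response, as sketched below. The function and key names are hypothetical, not part of the specification.

```python
def parse_assistance_response(response: dict) -> dict:
    """Enforce NOTE 1 of Table 8.25.3.2-1: exactly one of the success and
    failure IEs shall be present. Key names are illustrative only."""
    has_success = "success" in response
    has_failure = "failure" in response
    if has_success == has_failure:  # both present, or both absent
        raise ValueError("exactly one of success/failure shall be present")
    if has_success:
        return {"assistance_information": response["success"]}
    return {"cause": response["failure"].get("cause")}
```

A response carrying neither IE, or both, is rejected as malformed rather than silently interpreted.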