| Figure 4.1-1 | Anchor architecture |
| Figure 4.2-1 | Split inference intermediate data testbed architecture |
| Figure 4.3-1 | Model data testbed architecture |
| Table 4.4-1 | Confusion matrix |
| Figure 5.2.2-1 | Transmission of the ASR model |
| Figure 5.2.2-2 | Prediction of a transcript with the reconstructed ASR model |
| Table 5.2.4-1 | Number of parameters (numParam), size (sizeAnc) and word error rates (werAnc) of the anchor models |
| Table 5.2.8-1 | Datasets considered in the scenario |
| Figure 5.2.8-1 | Example Python script for determining sizeAnc and werAnc |
| Table 5.2.9-1 | Test cases and respective WER ranges |
| Figure 5.2.9-1 | Example of the characterization of a compression method for different test configurations T |
| Table 5.2.14-1 | Enabled NNC tools as described in [19]; other parameters are set to NNCodec's default values |
| Figure 5.2.14-1 | Compressed model size and model performance achieved for different QPs |
| Table 5.3.4-1 | DNN models used for scenario 2 |
| Figure 5.3.5-1 | Testbed architecture for scenario 2 |
| Figure 5.3.6-1 | Testbed configuration |
| Figure 5.3.9.1-1 | ONNX extract_model function |
| Figure 5.3.9.2-1 | VGG16 layers visualisation with Netron |
| Figure 5.3.9.2-2 | VGG16 split at node 5 "vgg0_conv2_fwd" |
| Figure 5.3.9.3-1 | Illustration of the resnet model split at node 6 with Netron |
| Table 5.3.9.4-1 | Split operations with ONNX model files |
| Table 5.3.9.6.1-1 | Example tensor size calculations |
| Figure 5.3.9.6.1-1 | Intermediate data size and number of branches per node |
| Table 5.3.9.6.1-2 | Example tensors obtained |
| Figure 5.3.9.6.2-1 | Inference experiments with ssd_resnet on images of various dimensions |
| Figure 5.3.9.6.2-2 | Inference experiments with retinanet on images of various dimensions |
| Figure 5.3.9.6.2-3 | Inference experiments with retinanet on images of various dimensions (bar graph) |
| Table 5.3.9.7-1 | Multi-branch script results for ssd_resnet |
| Table 5.3.9.7-2 | Multi-branch script results for retinanet - input image dimension 640x428 |
| Table 5.3.9.8-1 | Script predictions for ssd_resnet and retinanet |
| Table 5.3.9.9-1 | List of 50 selected images for the experiment |
| Figure 5.3.9.9-1 | ssd_resnet mAP score prediction on the dataset of 50 selected images |
| Figure 5.3.9.9-2 | retinanet mAP score prediction on the dataset of 50 selected images |
| Figure 5.3.9.9-3 | ssd_resnet mAP score prediction on the dataset of 50 selected images with split at node 10 |
| Figure 5.3.9.9-4 | retinanet mAP score prediction on the dataset of 50 selected images with split at node 1000 |
| Figure 5.3.9.9-5 | ssd_resnet mAP score prediction on the dataset of 50 selected images with 7 splits |
| Figure 5.3.9.9-6 | ssd_resnet mAP score prediction on the dataset of 50 selected images with 7 splits - zoomed X-axis |
| Figure 5.3.9.9-7 | retinanet mAP score prediction on the dataset of 50 selected images with 7 splits |
| Figure 5.3.9.9-8 | retinanet mAP score prediction on the dataset of 50 selected images with 7 splits - zoomed X-axis |
| Figure 5.3.9.9-9 | Compression performance with ssd_resnet |
| Figure 5.3.9.9-10 | Compression performance with retinanet |
| Figure 5.4.3-1 | Feature extractor part (VGG16) of the model used in this scenario. The light green part of each cube represents the convolution layer, the dark green part represents the ReLU layer, and the brown cube represents the MaxPool layer |
| Table 5.4.3-1 | Dimensions of each convolutional layer (in_channel, out_channel, kernel_height, kernel_width) of the feature extractor part of the model |
| Figure 5.4.4-1 | Architecture of the scenario |
| Figure 5.5.2-1 | Flow of the scenario |
| Figure 5.5.4-1 | Architecture of the model for the scenario |
| Figure 5.5.6-1 | Configuration 1: Full processing at MF/MRF |
| Figure 5.5.6-2 | Configuration 2: Split processing at UE1 and UE2 |
| Figure 5.5.6-3 | Configuration 3: Split processing at UE1, MF/MRF, and UE2 |
| Table 5.5.8.1-1 | Average number of syllables spoken per second for different languages |
| Table A.2-1 | Split point decision factors |
| Figure B.2.3-1 | The main evaluation process (simplified pseudo-code) |
| Table B.2.4-1 | Configuration parameters |
| Table B.2.4-2 | Implemented scenarios and compression methods |
| Table B.2.6-1 | Results written to the CSV file |
| Figure B.2.8-1 | Interface required to be implemented for new scenarios |
| Figure B.2.8-2 | Interface required to be implemented for new compression methods |