H610-i312100-1
Intel Core i3-12100 testing with an ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS) and Intel ADL-S GT1 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Intel ADL-S GT1 - Intel Core i3-12100:
  Processor: Intel Core i3-12100 @ 4.30GHz (4 Cores / 8 Threads)
  Motherboard: ASRock H610M-HDV/M.2 R2.0 (6.03 BIOS)
  Chipset: Intel Device 7aa7
  Memory: 4096MB
  Disk: 1000GB Western Digital WDS100T2B0A
  Graphics: Intel ADL-S GT1 3GB (1400MHz)
  Audio: Realtek ALC897
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 20.04
  Kernel: 5.15.0-89-generic (x86_64)
  Desktop: GNOME Shell 3.36.9
  Display Server: X Server 1.20.13
  OpenGL: 4.6 Mesa 21.2.6
  Vulkan: 1.2.182
  Compiler: GCC 9.4.0
  File-System: ext4
  Screen Resolution: 1366x768

All results below are from the single test configuration (Intel ADL-S GT1 - Intel Core i3-12100).

PyTorch 2.2.1 - Device: CPU (batches/sec; higher is better)

  Model               Batch 1   Batch 16   Batch 32   Batch 64   Batch 256   Batch 512
  ResNet-50             24.89      12.57      12.56      12.69       12.91       12.76
  ResNet-152            11.34       6.33       6.24       6.23        6.22        6.17
  Efficientnet_v2_l      8.10       4.51       4.55       4.52        4.50        4.48

PlaidML - FP16: No - Mode: Inference - Device: CPU (FPS; higher is better)

  VGG16      8.86
  ResNet 50  4.62

OpenVINO 2024.0 - Device: CPU (FPS; higher is better)

  Face Detection FP16                               1.24
  Face Detection FP16-INT8                          5.26
  Person Detection FP16                            10.06
  Person Detection FP32                            10.04
  Vehicle Detection FP16                           62.61
  Vehicle Detection FP16-INT8                     214.21
  Face Detection Retail FP16                      330.42
  Face Detection Retail FP16-INT8                 833.34
  Road Segmentation ADAS FP16                      18.24
  Road Segmentation ADAS FP16-INT8                 63.19
  Weld Porosity Detection FP16                    136.74
  Weld Porosity Detection FP16-INT8               513.04
  Machine Translation EN To DE FP16                14.95
  Person Vehicle Bike Detection FP16              166.71
  Noise Suppression Poconet-Like FP16             208.84
  Handwritten English Recognition FP16             66.40
  Handwritten English Recognition FP16-INT8        88.41
  Person Re-Identification Retail FP16            201.90
  Age Gender Recognition Retail 0013 FP16        3481.92
  Age Gender Recognition Retail 0013 FP16-INT8  10592.67

TensorFlow 2.16.1 (images/sec; higher is better)

  No results were produced for any tested configuration (Device: CPU and GPU; Batch Size: 1, 16, 32, 64, 256, 512; Model: VGG-16, AlexNet, GoogLeNet, ResNet-50).

ONNX Runtime 1.17 - Device: CPU (Inferences Per Second; higher is better)

  Model                         Parallel     Standard
  GPT-2                          50.31        51.12
  yolov4                         no result    no result
  T5 Encoder                     56.03        57.59
  bertsquad-12                    5.34494      7.91502
  CaffeNet 12-int8              209.45       233.25
  fcn-resnet101-11                0.512476     0.742532
  ArcFace ResNet-100             11.54        17.31
  ResNet50 v1-12-int8           122.01       142.74
  super-resolution-10            38.04        53.26
  Faster R-CNN R-50-FPN-int8      4.43575      5.10110

Neural Magic DeepSparse 1.7 (items/sec; higher is better)

  Model                                                          Async Multi-Stream   Sync Single-Stream
  NLP Document Classification, oBERT base uncased on IMDB             4.0749               4.3138
  NLP Text Classification, BERT base uncased SST2, Sparse INT8      164.65               145.27
  ResNet-50, Baseline                                                51.69                47.55
  ResNet-50, Sparse INT8                                            369.36               300.11
  Llama2 Chat 7b Quantized                                            0.0070               0.0092
  CV Classification, ResNet-50 ImageNet                              51.57                47.47
  CV Detection, YOLOv5s COCO, Sparse INT8                            23.88                23.45
  NLP Text Classification, DistilBERT mnli                           34.41                32.24
  CV Segmentation, 90% Pruned YOLACT Pruned                           5.0039               4.8149
  BERT-Large, NLP Question Answering, Sparse INT8                    78.19                64.85
  NLP Token Classification, BERT base uncased conll2003               4.1179               4.3321

LeelaChessZero 0.31.1 - Backend: BLAS (Nodes Per Second; higher is better): no result.

Numpy Benchmark (Score; higher is better): 463.17

AI Benchmark Alpha 0.1.2 (Score; higher is better): no result.

Llama.cpp b3067 (Tokens Per Second; higher is better): no results for llama-2-7b.Q4_0.gguf, llama-2-13b.Q4_0.gguf, or llama-2-70b-chat.Q5_0.gguf.

Llamafile 0.8.6 - Acceleration: CPU (Tokens Per Second; higher is better): no results for llava-v1.5-7b-q4, mistral-7b-instruct-v0.2.Q8_0, or wizardcoder-python-34b-v1.0.Q6_K.

spaCy 3.4.1 (tokens/sec; higher is better): no result.

ONNX Runtime 1.17 - Device: CPU (Inference Time Cost in ms; lower is better)

  Model                         Parallel     Standard
  GPT-2                          19.87        19.56
  T5 Encoder                     17.85        17.36
  bertsquad-12                  187.69       130.81
  CaffeNet 12-int8                4.77472      4.28771
  fcn-resnet101-11             1975.36      1353.24
  ArcFace ResNet-100             86.70        57.79
  ResNet50 v1-12-int8             8.19737      7.00536
  super-resolution-10            26.29        19.21
  Faster R-CNN R-50-FPN-int8    225.47       196.43

TensorFlow Lite 2022-05-18 (Microseconds; lower is better)

  SqueezeNet            4322.71
  Inception V4         64931.8
  NASNet Mobile        12005.2
  Mobilenet Float       3574.97
  Mobilenet Quant       5198.07
  Inception ResNet V2  60949.0

Caffe 2020-02-13 - Acceleration: CPU (Milli-Seconds; lower is better)

  Model      100 Iterations   200 Iterations   1000 Iterations
  AlexNet        44622            90229            445933
  GoogleNet     103411           206462           1038320

oneDNN 3.4 - Engine: CPU (ms; lower is better)

  IP Shapes 1D                            6.00601
  IP Shapes 3D                           26.70
  Convolution Batch Shapes Auto          44.14
  Deconvolution Batch shapes_1d          10.75
  Deconvolution Batch shapes_3d          10.37
  Recurrent Neural Network Training    5708.97
  Recurrent Neural Network Inference   3071.31

Mobile Neural Network 2.9.b11b7037d (ms; lower is better)

  nasnet             9.072
  mobilenetV3        1.188
  squeezenetv1.1     2.900
  resnet-v2-50      26.31
  SqueezeNetV1.0     4.744
  MobileNetV2_224    2.705
  mobilenet-v1-1.0   3.373
  inception-v3      33.51

NCNN 20230517 (ms; lower is better)

  Model                   CPU       Vulkan GPU
  mobilenet               21.50       21.05
  mobilenet-v2 (v2-v2)     5.35        5.29
  mobilenet-v3 (v3-v3)     3.18        3.19
  shufflenet-v2            2.25        2.21
  mnasnet                  3.53        3.19
  efficientnet-b0          8.15        7.99
  blazeface                0.62        0.61
  googlenet               16.13       15.83
  vgg16                  109.92      111.12
  resnet18                12.34       12.25
  alexnet                 12.63       12.54
  resnet50                26.71       26.97
  yolov4-tiny             40.12       39.80
  squeezenet_ssd          17.42       17.49
  regnety_400m             6.57        6.53
  vision_transformer     135.49      136.23
  FastestDet               3.31        3.30

TNN 0.3 - Target: CPU (ms; lower is better)

  DenseNet         2446.58
  MobileNet v2      201.11
  SqueezeNet v2      46.21
  SqueezeNet v1.1   166.13

OpenVINO 2024.0 - Device: CPU (latency in ms; lower is better)

  Face Detection FP16     3213.28
  Person Detection FP16    397.47
  Person Detection FP32    398.50
  Vehicle Detection FP16
63.85 |================================ OpenVINO 2024.0 Model: Face Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 759.24 |=============================== OpenVINO 2024.0 Model: Face Detection Retail FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 12.09 |================================ OpenVINO 2024.0 Model: Road Segmentation ADAS FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 219.10 |=============================== OpenVINO 2024.0 Model: Vehicle Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 18.66 |================================ OpenVINO 2024.0 Model: Weld Porosity Detection FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 29.24 |================================ OpenVINO 2024.0 Model: Face Detection Retail FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 4.80 |================================= OpenVINO 2024.0 Model: Road Segmentation ADAS FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 63.27 |================================ OpenVINO 2024.0 Model: Machine Translation EN To DE FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 267.44 |=============================== OpenVINO 2024.0 Model: Weld Porosity Detection FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 7.79 |================================= OpenVINO 2024.0 Model: Person Vehicle Bike Detection FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 23.99 |================================ OpenVINO 2024.0 Model: Noise Suppression Poconet-Like FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
19.10 |================================ OpenVINO 2024.0 Model: Handwritten English Recognition FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 60.21 |================================ OpenVINO 2024.0 Model: Person Re-Identification Retail FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 19.80 |================================ OpenVINO 2024.0 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 1.14 |================================= OpenVINO 2024.0 Model: Handwritten English Recognition FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 45.22 |================================ OpenVINO 2024.0 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 0.37 |================================= OpenCV 4.7 Test: DNN - Deep Neural Network ms < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 27185 |================================ Neural Magic DeepSparse 1.7 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 490.80 |=============================== Neural Magic DeepSparse 1.7 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 231.81 |=============================== Neural Magic DeepSparse 1.7 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 12.13 |================================ Neural Magic DeepSparse 1.7 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
6.8777 |=============================== Neural Magic DeepSparse 1.7 Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 38.68 |================================ Neural Magic DeepSparse 1.7 Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 21.02 |================================ Neural Magic DeepSparse 1.7 Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 5.3924 |=============================== Neural Magic DeepSparse 1.7 Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 3.3220 |=============================== Neural Magic DeepSparse 1.7 Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 211288.58 |============================ Neural Magic DeepSparse 1.7 Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 108889.85 |============================ Neural Magic DeepSparse 1.7 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 38.76 |================================ Neural Magic DeepSparse 1.7 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 21.06 |================================ Neural Magic DeepSparse 1.7 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
83.74 |================================ Neural Magic DeepSparse 1.7 Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 42.63 |================================ Neural Magic DeepSparse 1.7 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 58.10 |================================ Neural Magic DeepSparse 1.7 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 31.01 |================================ Neural Magic DeepSparse 1.7 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 399.50 |=============================== Neural Magic DeepSparse 1.7 Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 207.67 |=============================== Neural Magic DeepSparse 1.7 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 25.56 |================================ Neural Magic DeepSparse 1.7 Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 15.41 |================================ Neural Magic DeepSparse 1.7 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
485.38 |=============================== Neural Magic DeepSparse 1.7 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream ms/batch < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 230.83 |=============================== DeepSpeech 0.6 Acceleration: CPU Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 87.52 |================================ R Benchmark Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 0.1847 |=============================== RNNoise 0.2 Input: 26 Minute Long Talking Sample Seconds < Lower Is Better ECP-CANDLE 0.4 Benchmark: P1B2 Seconds < Lower Is Better ECP-CANDLE 0.4 Benchmark: P3B1 Seconds < Lower Is Better ECP-CANDLE 0.4 Benchmark: P3B2 Seconds < Lower Is Better Numenta Anomaly Benchmark 1.1 Detector: KNN CAD Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 245.09 |=============================== Numenta Anomaly Benchmark 1.1 Detector: Relative Entropy Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 20.60 |================================ Numenta Anomaly Benchmark 1.1 Detector: Windowed Gaussian Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 11.70 |================================ Numenta Anomaly Benchmark 1.1 Detector: Earthgecko Skyline Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 209.16 |=============================== Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 47.48 |================================ Numenta Anomaly Benchmark 1.1 Detector: Contextual Anomaly Detector OSE Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 52.86 |================================ Mlpack Benchmark Benchmark: scikit_ica Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
64.74 |================================ Mlpack Benchmark Benchmark: scikit_qda Seconds < Lower Is Better Mlpack Benchmark Benchmark: scikit_svm Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 14.51 |================================ Mlpack Benchmark Benchmark: scikit_linearridgeregression Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: GLM Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 611.36 |=============================== Scikit-Learn 1.2.2 Benchmark: SAGA Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 728.14 |=============================== Scikit-Learn 1.2.2 Benchmark: Tree Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 39.66 |================================ Scikit-Learn 1.2.2 Benchmark: Lasso Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 492.67 |=============================== Scikit-Learn 1.2.2 Benchmark: Glmnet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sparsify Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 74.98 |================================ Scikit-Learn 1.2.2 Benchmark: Plot Ward Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 52.22 |================================ Scikit-Learn 1.2.2 Benchmark: MNIST Dataset Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 64.56 |================================ Scikit-Learn 1.2.2 Benchmark: Plot Neighbors Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 105.44 |=============================== Scikit-Learn 1.2.2 Benchmark: SGD Regression Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 119.97 |=============================== Scikit-Learn 1.2.2 Benchmark: SGDOneClassSVM Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Lasso Path Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
226.08 |=============================== Scikit-Learn 1.2.2 Benchmark: Isolation Forest Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Fast KMeans Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Text Vectorizers Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 52.00 |================================ Scikit-Learn 1.2.2 Benchmark: Plot Hierarchical Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 186.55 |=============================== Scikit-Learn 1.2.2 Benchmark: Plot OMP vs. LARS Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 134.66 |=============================== Scikit-Learn 1.2.2 Benchmark: Feature Expansions Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 241.48 |=============================== Scikit-Learn 1.2.2 Benchmark: LocalOutlierFactor Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 83.87 |================================ Scikit-Learn 1.2.2 Benchmark: TSNE MNIST Dataset Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 324.55 |=============================== Scikit-Learn 1.2.2 Benchmark: Isotonic / Logistic Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Plot Incremental PCA Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 45.04 |================================ Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 96.58 |================================ Scikit-Learn 1.2.2 Benchmark: Plot Parallel Pairwise Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Isotonic / Pathological Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: RCV1 Logreg Convergencet Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Sample Without Replacement Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
95.57 |================================ Scikit-Learn 1.2.2 Benchmark: Covertype Dataset Benchmark Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 446.72 |=============================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Adult Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 60.21 |================================ Scikit-Learn 1.2.2 Benchmark: Isotonic / Perturbed Logarithm Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Threading Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 336.10 |=============================== Scikit-Learn 1.2.2 Benchmark: Plot Singular Value Decomposition Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 199.82 |=============================== Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Higgs Boson Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 138.51 |=============================== Scikit-Learn 1.2.2 Benchmark: 20 Newsgroups / Logistic Regression Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 49.52 |================================ Scikit-Learn 1.2.2 Benchmark: Plot Polynomial Kernel Approximation Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 233.56 |=============================== Scikit-Learn 1.2.2 Benchmark: Plot Non-Negative Matrix Factorization Seconds < Lower Is Better Scikit-Learn 1.2.2 Benchmark: Hist Gradient Boosting Categorical Only Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 14.59 |================================ Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Samples Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 276.88 |=============================== Scikit-Learn 1.2.2 Benchmark: Kernel PCA Solvers / Time vs. N Components Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
319.47 |=============================== Scikit-Learn 1.2.2 Benchmark: Sparse Random Projections / 100 Iterations Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 905.08 |=============================== Whisper.cpp 1.6.2 Model: ggml-base.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 246.29 |=============================== Whisper.cpp 1.6.2 Model: ggml-small.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 753.36 |=============================== Whisper.cpp 1.6.2 Model: ggml-medium.en - Input: 2016 State of the Union Seconds < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 2280.50 |============================== XNNPACK 2cd86b Model: FP32MobileNetV2 us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 5545 |================================= XNNPACK 2cd86b Model: FP32MobileNetV3Large us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 5484 |================================= XNNPACK 2cd86b Model: FP32MobileNetV3Small us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 1302 |================================= XNNPACK 2cd86b Model: FP16MobileNetV2 us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 3444 |================================= XNNPACK 2cd86b Model: FP16MobileNetV3Large us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 3396 |================================= XNNPACK 2cd86b Model: FP16MobileNetV3Small us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 965 |================================== XNNPACK 2cd86b Model: QU8MobileNetV2 us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 2163 |================================= XNNPACK 2cd86b Model: QU8MobileNetV3Large us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 1974 |================================= XNNPACK 2cd86b Model: QU8MobileNetV3Small us < Lower Is Better Intel ADL-S GT1 - Intel Core i3-12100 . 
728 |==================================
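The per-inference latencies reported by NCNN, TNN, and OpenVINO (ms) and by XNNPACK (us) can be compared with throughput-style results (e.g. PyTorch batches/sec) by inverting them. A minimal sketch of that conversion; the function names are our own, not part of any benchmark tool:

```python
def ms_to_ips(latency_ms: float) -> float:
    """Convert a per-inference latency in milliseconds to inferences/second."""
    return 1000.0 / latency_ms

def us_to_ips(latency_us: float) -> float:
    """Convert a per-inference latency in microseconds to inferences/second."""
    return 1_000_000.0 / latency_us

# Using values from the results above:
# NCNN CPU mobilenet at 21.50 ms      -> ~46.5 inferences/sec
# NCNN CPU vgg16 at 109.92 ms         -> ~9.1 inferences/sec
# XNNPACK QU8MobileNetV3Small, 728 us -> ~1374 inferences/sec
print(round(ms_to_ips(21.50), 1))   # 46.5
print(round(ms_to_ips(109.92), 1))  # 9.1
print(round(us_to_ips(728)))        # 1374
```

Note this is only valid for single-stream, batch-1 style numbers; DeepSparse's Asynchronous Multi-Stream ms/batch figures reflect overlapping requests and cannot be inverted this way.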
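Each result in the raw Phoronix Test Suite text export ends with the system identifier, a dot separator, the measured value, and an ASCII bar (e.g. "Intel ADL-S GT1 - Intel Core i3-12100 . 21.50 |===..."). A small sketch of how header/value pairs can be recovered from such a flattened export, assuming that fixed separator; the regex and function are illustrative, not part of PTS:

```python
import re

# Matches the trailing "<system id> . <value> |===" of each result; the text
# between consecutive matches is that result's test header.
RESULT_RE = re.compile(
    r"Intel ADL-S GT1 - Intel Core i3-12100 \.\s*([\d.]+)\s*\|=+"
)

def parse_results(text):
    """Split a flattened PTS export into (test header, value) pairs."""
    pairs = []
    pos = 0
    for m in RESULT_RE.finditer(text):
        header = text[pos:m.start()].strip()
        pairs.append((header, float(m.group(1))))
        pos = m.end()
    return pairs

sample = ("NCNN 20230517 Target: CPU - Model: mobilenet ms < Lower Is Better "
          "Intel ADL-S GT1 - Intel Core i3-12100 . 21.50 "
          "|================================")
print(parse_results(sample))
# [('NCNN 20230517 Target: CPU - Model: mobilenet ms < Lower Is Better', 21.5)]
```

Because the `\s*` between the dot and the value also matches newlines, this handles values that wrap onto the next line, as they do throughout the raw dump above.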