new
Intel Core i7-1185G7 testing with a Dell XPS 13 9310 0DXP1F (3.7.0 BIOS) and Intel Xe TGL GT2 8GB on Ubuntu 24.04 via the Phoronix Test Suite.

a, b, c:

  Processor: Intel Core i7-1185G7 @ 4.80GHz (4 Cores / 8 Threads), Motherboard: Dell XPS 13 9310 0DXP1F (3.7.0 BIOS), Chipset: Intel Tiger Lake-LP, Memory: 8 x 2GB LPDDR4-4267MT/s, Disk: Micron 2300 NVMe 512GB, Graphics: Intel Xe TGL GT2 8GB, Audio: Realtek ALC289, Network: Intel Wi-Fi 6 AX201

  OS: Ubuntu 24.04, Kernel: 6.10.0-061000rc4daily20240621-generic (x86_64), Desktop: GNOME Shell 46.0, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 24.3~git2407210600.0cc23b~oibaf~n (git-0cc23b6 2024-07-21 noble-oibaf-ppa), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 1920x1200

Build2 0.17
Time To Compile
Seconds < Lower Is Better
a . 536.82 |===================================================================
b . 538.58 |===================================================================
c . 539.22 |===================================================================

Etcpak 2.0
Benchmark: Multi-Threaded - Configuration: ETC2
Mpx/s > Higher Is Better
a . 105.94 |==================================================================
b . 107.00 |===================================================================
c . 107.03 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: nasnet
ms < Lower Is Better
a . 13.79 |====================================================================
b . 13.12 |================================================================
c . 13.86 |====================================================================

Mobile Neural Network 2.9.b11b7037d
Model: mobilenetV3
ms < Lower Is Better
a . 1.712 |===================================================================
b . 1.728 |====================================================================
c . 1.711 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: squeezenetv1.1
ms < Lower Is Better
a . 3.440 |===================================================================
b . 3.495 |====================================================================
c . 3.444 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: resnet-v2-50
ms < Lower Is Better
a . 20.20 |====================================================================
b . 20.07 |===================================================================
c . 20.25 |====================================================================

Mobile Neural Network 2.9.b11b7037d
Model: SqueezeNetV1.0
ms < Lower Is Better
a . 5.716 |==================================================================
b . 5.855 |====================================================================
c . 5.806 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: MobileNetV2_224
ms < Lower Is Better
a . 3.592 |====================================================================
b . 3.550 |===================================================================
c . 3.539 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: mobilenet-v1-1.0
ms < Lower Is Better
a . 4.307 |====================================================================
b . 4.309 |====================================================================
c . 4.233 |===================================================================

Mobile Neural Network 2.9.b11b7037d
Model: inception-v3
ms < Lower Is Better
a . 36.87 |====================================================================
b . 36.66 |===================================================================
c . 36.94 |====================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 80.91 |====================================================================
b . 74.58 |===============================================================
c . 76.19 |================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 12.35 |===============================================================
b . 13.40 |====================================================================
c . 13.12 |===================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 103.42 |===================================================================
b . 104.10 |===================================================================
c . 103.24 |==================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 9.66051 |==================================================================
b . 9.59821 |=================================================================
c . 9.67918 |==================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 2.92997 |==================================================================
b . 2.92361 |==================================================================
c . 2.75465 |==============================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 341.30 |===============================================================
b . 342.04 |===============================================================
c . 363.02 |===================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 4.87190 |==================================================================
b . 4.77207 |=================================================================
c . 4.88278 |==================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 205.26 |==================================================================
b . 209.55 |===================================================================
c . 204.80 |=================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 33.86 |====================================================================
b . 33.55 |===================================================================
c . 34.02 |====================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 29.53 |===================================================================
b . 29.81 |====================================================================
c . 29.39 |===================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 52.91 |====================================================================
b . 52.84 |====================================================================
c . 52.78 |====================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 18.90 |====================================================================
b . 18.92 |====================================================================
c . 18.94 |====================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 89.46 |=========================================================
b . 89.45 |=========================================================
c . 105.74 |===================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 11.17640 |=================================================================
b . 11.17780 |=================================================================
c . 9.45416 |=======================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 145.39 |===================================================================
b . 143.24 |==================================================================
c . 142.74 |==================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 6.87576 |=================================================================
b . 6.97865 |==================================================================
c . 7.00365 |==================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 3.89915 |================================================================
b . 4.00940 |==================================================================
c . 3.91465 |================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 256.46 |===================================================================
b . 249.41 |=================================================================
c . 255.45 |===================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 6.62817 |==================================================================
b . 6.63930 |==================================================================
c . 6.63286 |==================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 150.87 |===================================================================
b . 150.61 |===================================================================
c . 150.76 |===================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 226.99 |=================================================================
b . 233.89 |===================================================================
c . 233.72 |===================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4.40374 |==================================================================
b . 4.27347 |================================================================
c . 4.27680 |================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 327.60 |===================================================================
b . 325.39 |==================================================================
c . 328.58 |===================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 3.05091 |==================================================================
b . 3.07148 |==================================================================
c . 3.04188 |=================================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.494480 |===============================================================
b . 0.416435 |=====================================================
c . 0.507355 |=================================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 2022.32 |========================================================
b . 2401.33 |==================================================================
c . 1971.00 |======================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 0.717695 |================================================================
b . 0.732657 |=================================================================
c . 0.705358 |===============================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1393.35 |=================================================================
b . 1364.89 |================================================================
c . 1417.72 |==================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 8.62386 |================================================================
b . 8.85860 |==================================================================
c . 8.76598 |=================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 115.96 |===================================================================
b . 112.88 |=================================================================
c . 114.08 |==================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 15.49 |====================================================================
b . 15.22 |===================================================================
c . 15.30 |===================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 64.56 |===================================================================
b . 65.68 |====================================================================
c . 65.34 |====================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 96.31 |==============================================================
b . 100.00 |=================================================================
c . 103.36 |===================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 10.38110 |=================================================================
b . 9.99778 |===============================================================
c . 9.67201 |=============================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 118.02 |===================================================================
b . 117.67 |===================================================================
c . 116.64 |==================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 8.47108 |=================================================================
b . 8.49614 |=================================================================
c . 8.57100 |==================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 26.94 |===================================================================
b . 27.27 |====================================================================
c . 27.22 |====================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 37.12 |====================================================================
b . 36.67 |===================================================================
c . 36.74 |===================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 43.39 |====================================================================
b . 43.57 |====================================================================
c . 42.97 |===================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 23.04 |===================================================================
b . 22.95 |===================================================================
c . 23.27 |====================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.215070 |=====================================================
b . 0.196218 |================================================
c . 0.266239 |=================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 4649.64 |============================================================
b . 5096.38 |==================================================================
c . 3756.02 |=================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 0.286070 |=============================================================
b . 0.302547 |=================================================================
c . 0.285666 |=============================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 3495.65 |==================================================================
b . 3305.27 |==============================================================
c . 3500.58 |==================================================================
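A note on the Parallel vs. Standard executor columns in the ONNX Runtime results: these appear to correspond to ONNX Runtime's session execution mode, where "Parallel" enables the parallel graph executor and "Standard" uses the default sequential executor. Below is a minimal sketch of how the two modes are selected with the Python onnxruntime API; the model path "model.onnx", the dummy input, and the run count are hypothetical illustration, not the actual inputs used by the test profile above.

# Minimal sketch, assuming the Python onnxruntime package is installed and a
# float32-input ONNX model exists at the hypothetical path "model.onnx".
import time
import numpy as np
import onnxruntime as ort

def inferences_per_second(mode, runs=100):
    opts = ort.SessionOptions()
    # ORT_PARALLEL ("Parallel" above) vs. ORT_SEQUENTIAL ("Standard" above)
    opts.execution_mode = mode
    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    # Pin any dynamic dimensions to 1 and feed random data as a stand-in input.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feed = {inp.name: np.random.rand(*shape).astype(np.float32)}
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, feed)
    return runs / (time.perf_counter() - start)

print("Parallel:", inferences_per_second(ort.ExecutionMode.ORT_PARALLEL))
print("Standard:", inferences_per_second(ort.ExecutionMode.ORT_SEQUENTIAL))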
ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 21.64 |====================================================================
b . 20.17 |===============================================================
c . 20.97 |==================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 46.21 |===============================================================
b . 49.57 |====================================================================
c . 47.68 |=================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 33.91 |==================================================================
b . 35.15 |====================================================================
c . 33.04 |================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 29.49 |==================================================================
b . 28.45 |================================================================
c . 30.26 |====================================================================

simdjson 3.10
Throughput Test: Kostya
GB/s > Higher Is Better
a . 3.54 |=====================================================================
b . 3.51 |====================================================================
c . 3.48 |====================================================================

simdjson 3.10
Throughput Test: TopTweet
GB/s > Higher Is Better
a . 7.68 |=====================================================================
b . 7.70 |=====================================================================
c . 7.68 |=====================================================================

simdjson 3.10
Throughput Test: LargeRandom
GB/s > Higher Is Better
a . 1.23 |=====================================================================
b . 1.23 |=====================================================================
c . 1.22 |====================================================================

simdjson 3.10
Throughput Test: PartialTweets
GB/s > Higher Is Better
a . 7.42 |=====================================================================
b . 7.45 |=====================================================================
c . 7.44 |=====================================================================

simdjson 3.10
Throughput Test: DistinctUserID
GB/s > Higher Is Better
a . 8.08 |=====================================================================
b . 8.10 |=====================================================================
c . 8.07 |=====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 1.451 |====================================================================
b . 1.452 |====================================================================
c . 1.455 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 5.900 |===================================================================
b . 5.909 |====================================================================
c . 5.945 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 13.42 |===================================================================
b . 13.41 |===================================================================
c . 13.69 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 48.64 |===================================================================
b . 48.40 |===================================================================
c . 49.33 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 5.364 |====================================================================
b . 5.372 |====================================================================
c . 5.356 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 20.22 |====================================================================
b . 20.19 |====================================================================
c . 20.14 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 48.24 |====================================================================
b . 48.27 |====================================================================
c . 48.21 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 219.80 |===================================================================
b . 219.38 |===================================================================
c . 219.56 |===================================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 0.230 |====================================================================
b . 0.230 |====================================================================
c . 0.229 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 1.145 |====================================================================
b . 1.144 |====================================================================
c . 1.146 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 1.642 |====================================================================
b . 1.634 |====================================================================
c . 1.637 |====================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 4.098 |==================================================================
b . 4.219 |====================================================================
c . 4.226 |====================================================================

XNNPACK 2cd86b
Model: FP32MobileNetV2
us < Lower Is Better
a . 2758 |=====================================================================
b . 2757 |=====================================================================
c . 2766 |=====================================================================

XNNPACK 2cd86b
Model: FP32MobileNetV3Large
us < Lower Is Better
a . 3116 |=====================================================================
b . 3091 |====================================================================
c . 3096 |=====================================================================

XNNPACK 2cd86b
Model: FP32MobileNetV3Small
us < Lower Is Better
a . 1068 |=====================================================================
b . 1067 |=====================================================================
c . 1069 |=====================================================================

XNNPACK 2cd86b
Model: FP16MobileNetV2
us < Lower Is Better
a . 4696 |=====================================================================
b . 4678 |=====================================================================
c . 4685 |=====================================================================

XNNPACK 2cd86b
Model: FP16MobileNetV3Large
us < Lower Is Better
a . 4178 |=====================================================================
b . 4203 |=====================================================================
c . 4164 |====================================================================

XNNPACK 2cd86b
Model: FP16MobileNetV3Small
us < Lower Is Better
a . 1402 |=====================================================================
b . 1403 |=====================================================================
c . 1398 |=====================================================================

XNNPACK 2cd86b
Model: QU8MobileNetV2
us < Lower Is Better
a . 2359 |=====================================================================
b . 2361 |=====================================================================
c . 2353 |=====================================================================

XNNPACK 2cd86b
Model: QU8MobileNetV3Large
us < Lower Is Better
a . 2149 |=====================================================================
b . 2161 |=====================================================================
c . 2153 |=====================================================================

XNNPACK 2cd86b
Model: QU8MobileNetV3Small
us < Lower Is Better
a . 814 |======================================================================
b . 814 |======================================================================
c . 806 |=====================================================================

Y-Cruncher 0.8.5
Pi Digits To Calculate: 1B
Seconds < Lower Is Better
a . 58.00 |====================================================================
b . 57.94 |====================================================================
c . 57.86 |====================================================================

Y-Cruncher 0.8.5
Pi Digits To Calculate: 500M
Seconds < Lower Is Better
a . 25.73 |====================================================================
b . 25.76 |====================================================================
c . 25.75 |====================================================================
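Across the three runs of this identically configured system, most results agree to within a few percent, while some parallel-executor ONNX Runtime workloads (T5 Encoder, fcn-resnet101-11, ResNet101_DUC_HDC-12) swing by roughly 17-31% run to run. A minimal sketch of that spread calculation, (max - min) / mean per result, using values transcribed from the tables above:

# Relative run-to-run spread for selected results from runs a, b, c above.
from statistics import mean

results = {
    "Build2 Time To Compile (s)":                 [536.82, 538.58, 539.22],
    "ONNX T5 Encoder Parallel (inf/s)":           [89.46, 89.45, 105.74],
    "ONNX fcn-resnet101-11 Parallel (inf/s)":     [0.494480, 0.416435, 0.507355],
    "ONNX ResNet101_DUC_HDC-12 Parallel (inf/s)": [0.215070, 0.196218, 0.266239],
    "SVT-AV1 Preset 13 1080p (fps)":              [219.80, 219.38, 219.56],
}

for name, runs in results.items():
    spread = (max(runs) - min(runs)) / mean(runs)
    print(f"{name}: {spread:.1%} spread")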