Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1662 BIOS) and XFX AMD Radeon RX 7900 XTX 24GB on Ubuntu 24.04 via the Phoronix Test Suite.

a, b, c, d, e (all five runs used an identical hardware and software configuration, listed once here rather than repeated per run):
  Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
  Motherboard: ASUS PRIME Z790-P WIFI (1662 BIOS)
  Chipset: Intel Raptor Lake-S PCH
  Memory: 2 x 16GB DDR5-6000MT/s Corsair CMK32GX5M2B6000C36
  Disk: Western Digital WD_BLACK SN850X 2000GB
  Graphics: XFX AMD Radeon RX 7900 XTX 24GB
  Audio: Realtek ALC897
  Monitor: ASUS VP28U
  OS: Ubuntu 24.04
  Kernel: 6.10.0-061000rc6daily20240706-generic (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server 1.21.1.11 + Wayland
  OpenGL: 4.6 Mesa 24.2~git2407080600.801ed4~oibaf~n (git-801ed4d 2024-07-08 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

ONNX Runtime 1.19 - Device: CPU
Inferences Per Second > Higher Is Better

Model                        Executor    a         b         c         d         e
GPT-2                        Parallel    118.38    125.51    114.76    120.67    121.40
GPT-2                        Standard    176.40    176.61    176.61    176.04    176.42
yolov4                       Parallel    14.02     14.93     15.80     15.08     13.50
yolov4                       Standard    17.24     17.64     17.14     17.41     17.38
ZFNet-512                    Parallel    (no result reported)
ZFNet-512                    Standard    (no result reported)
T5 Encoder                   Parallel    144.92    141.47    136.25    146.63    138.84
T5 Encoder                   Standard    209.09    209.05    209.11    209.89    209.31
bertsquad-12                 Parallel    17.35     17.71     16.36     17.46     18.16
bertsquad-12                 Standard    21.76     21.89     22.01     22.08     21.62
CaffeNet 12-int8             Parallel    313.19    310.17    312.62    313.70    307.62
CaffeNet 12-int8             Standard    972.40    965.84    972.91    965.12    960.34
fcn-resnet101-11             Parallel    2.03157   1.98904   2.01980   2.02126   2.07055
fcn-resnet101-11             Standard    2.80524   2.81097   2.79435   2.80968   2.79826
ArcFace ResNet-100           Parallel    9.63175   9.05554   9.19721   8.87924   9.64652
ArcFace ResNet-100           Standard    9.49206   9.47765   9.49620   9.49065   9.49647
ResNet50 v1-12-int8          Parallel    194.15    200.33    200.77    196.61    192.69
ResNet50 v1-12-int8          Standard    315.35    313.47    315.74    316.77    312.49
super-resolution-10          Parallel    96.19     95.60     94.84     94.43     93.55
super-resolution-10          Standard    87.00     86.61     87.03     87.24     87.43
ResNet101_DUC_HDC-12         Parallel    1.007155  0.978795  1.041450  0.999466  0.967830
ResNet101_DUC_HDC-12         Standard    1.11819   1.12367   1.12825   1.12563   1.12443
Faster R-CNN R-50-FPN-int8   Parallel    46.64     45.26     46.72     45.58     46.10
Faster R-CNN R-50-FPN-int8   Standard    59.54     60.02     58.61     59.17     60.20

SVT-AV1 2.2
Frames Per Second > Higher Is Better

Encoder Mode   Input              a        b        c        d        e
Preset 3       Bosphorus 4K       7.451    7.478    7.442    7.474    7.446
Preset 5       Bosphorus 4K       27.27    27.59    27.39    27.58    27.31
Preset 8       Bosphorus 4K       58.88    59.44    59.24    60.08    58.83
Preset 13      Bosphorus 4K       227.36   228.32   227.36   226.98   227.70
Preset 3       Bosphorus 1080p    23.69    23.74    23.81    23.70    23.64
Preset 5       Bosphorus 1080p    82.46    83.75    82.45    83.75    82.22
Preset 8       Bosphorus 1080p    187.97   188.99   190.83   187.63   190.47
Preset 13      Bosphorus 1080p    688.06   693.23   696.74   697.16   700.09
Preset 3       Beauty 4K 10-bit   1.330    1.324    1.335    1.330    1.320
Preset 5       Beauty 4K 10-bit   5.937    5.880    5.919    5.991    5.906
Preset 8       Beauty 4K 10-bit   8.359    8.501    8.485    8.514    8.568
Preset 13      Beauty 4K 10-bit   18.00    18.02    18.02    17.98    18.03

ONNX Runtime 1.19 - Device: CPU
Inference Time Cost (ms) < Lower Is Better

Model                        Executor    a         b         c         d         e
GPT-2                        Parallel    8.44437   7.96304   8.70901   8.28234   8.23308
GPT-2                        Standard    5.66499   5.65885   5.65828   5.67701   5.66489
yolov4                       Parallel    71.37     66.98     63.30     66.31     74.10
yolov4                       Standard    58.02     56.69     58.36     57.45     57.52
T5 Encoder                   Parallel    6.90233   7.06699   7.33799   6.81870   7.20087
T5 Encoder                   Standard    4.78134   4.78224   4.78108   4.76332   4.77650
bertsquad-12                 Parallel    57.63     56.47     61.12     57.28     55.08
bertsquad-12                 Standard    45.97     45.68     45.42     45.29     46.25
CaffeNet 12-int8             Parallel    3.19209   3.22287   3.19755   3.18653   3.24962
CaffeNet 12-int8             Standard    1.02774   1.03469   1.02726   1.03550   1.04055
fcn-resnet101-11             Parallel    492.38    502.75    495.10    494.74    482.96
fcn-resnet101-11             Standard    356.48    355.75    357.86    355.91    357.36
ArcFace ResNet-100           Parallel    104.50    110.43    108.73    112.62    103.66
ArcFace ResNet-100           Standard    105.35    105.51    105.30    105.37    105.30
ResNet50 v1-12-int8          Parallel    5.15165   4.99090   4.97989   5.08514   5.18875
ResNet50 v1-12-int8          Standard    3.17050   3.18971   3.16663   3.15625   3.19967
super-resolution-10          Parallel    10.40     10.46     10.54     10.59     10.69
super-resolution-10          Standard    11.49     11.55     11.49     11.46     11.44
ResNet101_DUC_HDC-12         Parallel    993.33    1021.66   960.20    1000.53   1033.24
ResNet101_DUC_HDC-12         Standard    894.33    889.88    886.33    888.39    889.34
Faster R-CNN R-50-FPN-int8   Parallel    21.44     22.09     21.40     21.94     21.69
Faster R-CNN R-50-FPN-int8   Standard    16.79     16.66     17.06     16.90     16.61
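Since the five runs share an identical configuration, the differences between them reflect run-to-run variation rather than hardware or software changes. A minimal Python sketch of one way to quantify that spread, using the GPT-2 / Parallel throughput figures reported above (the per-run values come from this result file; the helper itself is illustrative and not part of the Phoronix Test Suite):

```python
# Quantify run-to-run variation across the five runs (a-e) for one benchmark.
# Values: ONNX Runtime 1.19, GPT-2, Device: CPU, Executor: Parallel (from above).
from statistics import mean, stdev

runs = {"a": 118.38, "b": 125.51, "c": 114.76, "d": 120.67, "e": 121.40}

avg = mean(runs.values())
cv = stdev(runs.values()) / avg * 100  # coefficient of variation, percent

print(f"mean: {avg:.2f} inferences/sec, CV: {cv:.1f}%")
# → mean: 120.14 inferences/sec, CV: 3.3%
```

By this measure the parallel-executor ONNX results show noticeably more scatter than the standard-executor ones, where the five runs agree to within a fraction of a percent.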