onnx 119

AMD Ryzen Threadripper 7980X 64-Cores testing with a System76 Thelio Major (FA Z5 BIOS) and AMD Radeon Pro W7900 on Ubuntu 24.10 via the Phoronix Test Suite.

System configuration (runs b, c, d, and e used an identical configuration to run a):

  Processor: AMD Ryzen Threadripper 7980X 64-Cores @ 7.79GHz (64 Cores / 128 Threads)
  Motherboard: System76 Thelio Major (FA Z5 BIOS)
  Chipset: AMD Device 14a4
  Memory: 4 x 32GB DDR5-4800MT/s Micron MTC20F1045S1RC48BA2
  Disk: 1000GB CT1000T700SSD5
  Graphics: AMD Radeon Pro W7900
  Audio: AMD Device 14cc
  Monitor: DELL P2415Q
  Network: Aquantia AQC113C NBase-T/IEEE + Realtek RTL8125 2.5GbE + Intel Wi-Fi 6E
  OS: Ubuntu 24.10
  Kernel: 6.8.0-31-generic (x86_64)
  Desktop: GNOME Shell
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.0.9-0ubuntu2 (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 14.2.0
  File-System: ext4
  Screen Resolution: 1920x1200

SVT-AV1 2.2 - Frames Per Second (higher is better):

  Encoder Mode - Input                  a         b         c         d         e
  Preset 3 - Bosphorus 4K           13.40     13.35     13.53     13.38     13.41
  Preset 5 - Bosphorus 4K           45.61     45.34     45.66     45.17     45.41
  Preset 8 - Bosphorus 4K           96.21     96.42     96.06     97.05     96.78
  Preset 13 - Bosphorus 4K         231.11    233.25    232.40    228.83    228.97
  Preset 3 - Bosphorus 1080p        36.61     36.55     36.66     36.71     36.46
  Preset 5 - Bosphorus 1080p       119.22    119.23    119.77    118.80    119.92
  Preset 8 - Bosphorus 1080p       271.68    269.88    266.01    269.43    268.76
  Preset 13 - Bosphorus 1080p      740.00    740.79    764.00    783.12    749.26
  Preset 3 - Beauty 4K 10-bit       1.864     1.857     1.867     1.874     1.867
  Preset 5 - Beauty 4K 10-bit       7.775     7.801     7.811     7.809     7.765
  Preset 8 - Beauty 4K 10-bit       10.84     10.83     10.85     10.87     10.97
  Preset 13 - Beauty 4K 10-bit      19.52     19.42     19.40     19.44     19.44

ONNX Runtime 1.19 - Device: CPU - Inferences Per Second (higher is better):

  Model - Executor                              a           b           c           d           e
  GPT-2 - Parallel                         160.71      161.41      162.18      161.86      162.25
  GPT-2 - Standard                         105.04      105.79      105.59      105.15      107.72
  yolov4 - Parallel                       5.27246     5.12789     5.30486     5.00717     5.11842
  yolov4 - Standard                       8.76007     8.71251     9.05619     8.79279     9.08788
  ZFNet-512 - Parallel                      53.41       54.58       50.74       54.30       54.42
  ZFNet-512 - Standard                      81.78       81.42       82.50       81.67       78.16
  T5 Encoder - Parallel                    335.50      338.93      339.18      335.86      338.70
  T5 Encoder - Standard                    140.96      142.11      141.23      138.16      138.61
  bertsquad-12 - Parallel                 6.76854     6.86262     6.91740     7.10593     7.02121
  bertsquad-12 - Standard                   12.67       12.74       13.18       13.00       12.74
  CaffeNet 12-int8 - Parallel              204.48      200.84      178.13      212.81      210.62
  CaffeNet 12-int8 - Standard              454.72      460.02      442.12      419.42      440.35
  fcn-resnet101-11 - Parallel             1.10358     1.09825     1.09469     1.14486     1.12456
  fcn-resnet101-11 - Standard             3.92686     4.00404     3.43115     3.47069     3.61108
  ArcFace ResNet-100 - Parallel             12.60       12.48       12.39       12.61       12.85
  ArcFace ResNet-100 - Standard             38.35       36.97       38.78       39.21       36.68
  ResNet50 v1-12-int8 - Parallel            74.21       74.36       73.12       73.47       74.33
  ResNet50 v1-12-int8 - Standard           201.10      201.42      194.34      203.84      202.69
  super-resolution-10 - Parallel           131.31      130.49      130.77      132.22      131.27
  super-resolution-10 - Standard           100.59      100.68      100.88      105.02       99.41
  ResNet101_DUC_HDC-12 - Parallel         1.60987     1.59999     1.61916     1.58526     1.61010
  ResNet101_DUC_HDC-12 - Standard         2.68248     2.70145     2.69105     2.66966     2.69413
  Faster R-CNN R-50-FPN-int8 - Parallel     28.73       28.88       27.65       27.61       27.76
  Faster R-CNN R-50-FPN-int8 - Standard     44.56       44.46       44.04       45.29       44.11

ONNX Runtime 1.19 - Device: CPU - Inference Time Cost in ms (lower is better):

  Model - Executor                              a           b           c           d           e
  GPT-2 - Parallel                        6.21628     6.18911     6.15989     6.17185     6.15756
  GPT-2 - Standard                        9.51832     9.45128     9.46853     9.50891     9.28112
  yolov4 - Parallel                        189.68      195.13      188.50      199.71      195.37
  yolov4 - Standard                        114.15      114.83      110.42      113.73      110.03
  ZFNet-512 - Parallel                      18.73       18.33       19.71       18.41       18.38
  ZFNet-512 - Standard                      12.23       12.29       12.12       12.24       12.79
  T5 Encoder - Parallel                   2.97938     2.94936     2.94690     2.97608     2.95124
  T5 Encoder - Standard                   7.09442     7.03694     7.08004     7.23734     7.21415
  bertsquad-12 - Parallel                  147.81      145.78      144.56      140.72      142.42
  bertsquad-12 - Standard                   78.92       78.57       75.87       76.94       78.47
  CaffeNet 12-int8 - Parallel             4.88925     4.99490     5.61236     4.69742     4.74627
  CaffeNet 12-int8 - Standard             2.19841     2.17339     2.26126     2.38355     2.27019
  fcn-resnet101-11 - Parallel              908.30      913.54      913.49      873.47      889.23
  fcn-resnet101-11 - Standard              257.99      252.34      291.44      288.12      276.92
  ArcFace ResNet-100 - Parallel             79.34       80.16       80.70       79.29       77.85
  ArcFace ResNet-100 - Standard             26.08       27.06       25.78       25.50       27.26
  ResNet50 v1-12-int8 - Parallel            13.47       13.45       13.67       13.61       13.45
  ResNet50 v1-12-int8 - Standard          4.97187     4.96437     5.14489     4.90503     4.93285
  super-resolution-10 - Parallel          7.61417     7.66242     7.64590     7.56180     7.61659
  super-resolution-10 - Standard          9.94497     9.93654     9.91284     9.52165    10.05920
  ResNet101_DUC_HDC-12 - Parallel          621.23      625.06      617.60      630.81      621.08
  ResNet101_DUC_HDC-12 - Standard          372.79      370.18      371.60      374.58      371.18
  Faster R-CNN R-50-FPN-int8 - Parallel     34.83       34.65       36.16       36.21       36.02
  Faster R-CNN R-50-FPN-int8 - Standard     22.44       22.49       22.70       22.08       22.67
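With five runs (a through e) per test, run-to-run variance can be summarized with the relative standard deviation. A minimal sketch using the SVT-AV1 Preset 13 - Bosphorus 1080p figures from the results above (the choice of test is illustrative; any row works the same way):

```python
import statistics

# FPS across runs a-e for SVT-AV1 Preset 13 - Bosphorus 1080p (values from the table above)
runs = [740.00, 740.79, 764.00, 783.12, 749.26]

mean = statistics.mean(runs)                     # average FPS over the five runs
rsd = statistics.stdev(runs) / mean * 100        # relative standard deviation, in percent

print(f"mean: {mean:.2f} FPS, RSD: {rsd:.2f}%")
```

A low RSD (a few percent or less, as here) indicates the spread between runs is noise rather than a real configuration difference.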