AMD Ryzen AI 9 HX 370 testing with an ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS) and AMD Radeon 512MB on Ubuntu 24.04 via the Phoronix Test Suite.

a, b, c, d (all four runs used an identical configuration):

  Processor: AMD Ryzen AI 9 HX 370 @ 4.37GHz (12 Cores / 24 Threads)
  Motherboard: ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS)
  Chipset: AMD Device 1507
  Memory: 4 x 8GB LPDDR5-7500MT/s Samsung K3KL9L90CM-MGCT
  Disk: 1024GB MTFDKBA1T0QFM-1BD1AABGB
  Graphics: AMD Radeon 512MB
  Audio: AMD Rembrandt Radeon HD Audio
  Network: MEDIATEK Device 7925
  OS: Ubuntu 24.04
  Kernel: 6.10.0-phx (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server 1.21.1.11 + Wayland
  OpenGL: 4.6 Mesa 24.3~git2407230600.74b4c9~oibaf~n (git-74b4c91 2024-07-23 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 2880x1800

SVT-AV1 2.2 - Frames Per Second (higher is better)

Encoder Mode - Input                    a         b         c         d
Preset 3  - Bosphorus 4K            3.309     3.463     3.151     3.234
Preset 5  - Bosphorus 4K            13.23     14.35     11.83     11.90
Preset 8  - Bosphorus 4K            27.66     32.39     25.74     26.04
Preset 13 - Bosphorus 4K           106.95    120.17     96.95     98.81
Preset 3  - Bosphorus 1080p         11.37     12.12     10.74     10.85
Preset 5  - Bosphorus 1080p         42.75     45.54     40.07     43.02
Preset 8  - Bosphorus 1080p        103.50    112.10     94.11     97.18
Preset 13 - Bosphorus 1080p        440.23    481.65    413.09    430.24
Preset 3  - Beauty 4K 10-bit        0.543     0.561     0.503     0.514
Preset 5  - Beauty 4K 10-bit        2.697     2.601     2.488     2.533
Preset 8  - Beauty 4K 10-bit        4.011     3.854     3.729     3.906
Preset 13 - Beauty 4K 10-bit       10.675     9.634     9.588    10.036

ONNX Runtime 1.19 - Inferences Per Second (higher is better) - Device: CPU

Model - Executor                              a          b          c          d
GPT-2 - Parallel                         102.72      95.52      97.83      99.04
GPT-2 - Standard                         122.89     110.41     118.04     118.38
yolov4 - Parallel                       3.61793    3.30690    3.58762    3.55341
yolov4 - Standard                       5.70412    4.89908    5.17554    5.90294
ZFNet-512 - Parallel                      42.66      37.57      38.16      42.36
ZFNet-512 - Standard                      86.25      78.85      76.88      84.93
T5 Encoder - Parallel                    146.15     129.19     135.74     135.93
T5 Encoder - Standard                    161.64     162.32     160.71     160.62
bertsquad-12 - Parallel                 5.04160    4.72998    4.73966    4.88633
bertsquad-12 - Standard                 7.59373    6.66153    6.69892    7.21263
CaffeNet 12-int8 - Parallel              136.32     131.27     120.85     135.84
CaffeNet 12-int8 - Standard              502.18     510.92     462.59     506.74
fcn-resnet101-11 - Parallel            0.921778   0.865881   0.812788   0.902388
fcn-resnet101-11 - Standard             1.13181    1.03114    1.02034    1.10029
ArcFace ResNet-100 - Parallel             10.94      10.22      10.03      10.23
ArcFace ResNet-100 - Standard             19.47      19.16      17.31      19.31
ResNet50 v1-12-int8 - Parallel            69.92      65.88      64.20      70.68
ResNet50 v1-12-int8 - Standard           200.23     194.58     179.43     200.89
super-resolution-10 - Parallel            68.09      64.51      59.98      68.13
super-resolution-10 - Standard            70.95      66.61      65.82      70.84
ResNet101_DUC_HDC-12 - Parallel        0.473424   0.435521   0.421012   0.453065
ResNet101_DUC_HDC-12 - Standard        0.517125   0.488444   0.452827   0.517357
Faster R-CNN R-50-FPN-int8 - Parallel     23.58      22.84      22.03      24.17
Faster R-CNN R-50-FPN-int8 - Standard     35.45      34.30      33.02      35.20

ONNX Runtime 1.19 - Inference Time Cost in ms (lower is better) - Device: CPU

Model - Executor                              a          b          c          d
GPT-2 - Parallel                        9.72552   10.45900   10.21010   10.08590
GPT-2 - Standard                        8.12987    9.04782    8.46444    8.43814
yolov4 - Parallel                        276.40     302.39     278.73     281.41
yolov4 - Standard                        175.31     204.12     193.21     169.40
ZFNet-512 - Parallel                      23.44      26.61      26.20      23.60
ZFNet-512 - Standard                      11.59      12.68      13.00      11.77
T5 Encoder - Parallel                   6.84002    7.73761    7.36381    7.35405
T5 Encoder - Standard                   6.18373    6.15789    6.22044    6.22279
bertsquad-12 - Parallel                  198.34     211.41     210.98     204.65
bertsquad-12 - Standard                  131.68     150.11     149.27     138.64
CaffeNet 12-int8 - Parallel             7.33407    7.61522    8.27242    7.35765
CaffeNet 12-int8 - Standard             1.99007    1.95616    2.16038    1.97224
fcn-resnet101-11 - Parallel             1084.85    1154.89    1230.33    1108.16
fcn-resnet101-11 - Standard              883.54     969.78     980.06     908.85
ArcFace ResNet-100 - Parallel             91.42      97.83      99.67      97.79
ArcFace ResNet-100 - Standard             51.36      52.20      57.75      51.80
ResNet50 v1-12-int8 - Parallel            14.30      15.18      15.57      14.14
ResNet50 v1-12-int8 - Standard          4.99245    5.13748    5.57084    4.97589
super-resolution-10 - Parallel            14.69      15.50      16.67      14.68
super-resolution-10 - Standard            14.09      15.01      15.19      14.11
ResNet101_DUC_HDC-12 - Parallel         2112.26    2296.09    2375.22    2207.18
ResNet101_DUC_HDC-12 - Standard         1933.76    2047.31    2208.35    1932.89
Faster R-CNN R-50-FPN-int8 - Parallel     42.41      43.78      45.38      41.36
Faster R-CNN R-50-FPN-int8 - Standard     28.21      29.15      30.28      28.40

Whisperfile 20Aug24 - Seconds (lower is better)

Model Size            a         b         c         d
Tiny              54.41     56.77     58.68     55.22
Small            262.41    271.83    277.33    268.06
Medium           765.36    782.84    785.55    761.26
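One common way to condense result tables like these into a single per-run figure is a geometric mean of per-test scores normalized to the fastest run, so no single test dominates the summary. The sketch below is illustrative only (it is not part of the Phoronix Test Suite output) and uses a subset of the SVT-AV1 frames-per-second numbers reported above:

```python
from math import prod

# Frames-per-second results for runs a-d on three of the SVT-AV1 tests
# reported above; higher is better.
fps = {
    "Preset 3 - Bosphorus 4K":  {"a": 3.309,  "b": 3.463,  "c": 3.151,  "d": 3.234},
    "Preset 5 - Bosphorus 4K":  {"a": 13.23,  "b": 14.35,  "c": 11.83,  "d": 11.90},
    "Preset 13 - Bosphorus 4K": {"a": 106.95, "b": 120.17, "c": 96.95,  "d": 98.81},
}

def geomean(values):
    """Geometric mean: the appropriate average for ratios/normalized scores."""
    return prod(values) ** (1.0 / len(values))

def summarize(results):
    """Normalize each test to the fastest run, then geomean per run."""
    scores = {}
    for run in ("a", "b", "c", "d"):
        normalized = [res[run] / max(res.values()) for res in results.values()]
        scores[run] = geomean(normalized)
    return scores

if __name__ == "__main__":
    for run, score in sorted(summarize(fps).items(), key=lambda kv: -kv[1]):
        print(f"{run}: {score:.3f}")
```

On this subset, run b is fastest on every test (score 1.0) and the others land slightly below it, matching the per-test ordering visible in the table.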