new satty
AMD Ryzen AI 9 365 testing with an ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS) and AMD Radeon 512MB on Ubuntu 24.04 via the Phoronix Test Suite.

a, b, c, d (identical configuration for all four runs):

  Processor: AMD Ryzen AI 9 365 @ 4.31GHz (10 Cores / 20 Threads)
  Motherboard: ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS)
  Chipset: AMD Device 1507
  Memory: 4 x 6GB LPDDR5-7500MT/s Micron MT62F1536M32D4DS-026
  Disk: 1024GB MTFDKBA1T0QFM-1BD1AABGB
  Graphics: AMD Radeon 512MB
  Audio: AMD Rembrandt Radeon HD Audio
  Network: MEDIATEK Device 7925
  OS: Ubuntu 24.04
  Kernel: 6.10.0-phx (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.3~git2407280600.a211a5~oibaf~n (git-a211a51 2024-07-28 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 2880x1800

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 96.85 |====================================================================
b . 95.59 |===================================================================
c . 97.09 |====================================================================
d . 94.37 |==================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 10.32 |==================================================================
b . 10.45 |===================================================================
c . 10.29 |==================================================================
d . 10.59 |====================================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 130.75 |====================================================================
b . 128.26 |===================================================================
c . 127.43 |==================================================================
d . 120.74 |===============================================================

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 7.64170 |===============================================================
b . 7.78951 |================================================================
c . 7.84183 |================================================================
d . 8.27490 |====================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 4.36533 |====================================================================
b . 4.15031 |=================================================================
c . 4.36559 |====================================================================
d . 3.95583 |==============================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 229.07 |==============================================================
b . 240.94 |=================================================================
c . 229.06 |==============================================================
d . 252.88 |====================================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 6.48303 |================================================================
b . 6.23528 |=============================================================
c . 6.92476 |====================================================================
d . 5.89878 |==========================================================

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 154.25 |==============================================================
b . 160.37 |================================================================
c . 144.41 |==========================================================
d . 169.62 |====================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 42.54 |=================================================================
b . 41.98 |================================================================
c . 44.79 |====================================================================
d . 42.56 |=================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 23.50 |===================================================================
b . 23.82 |====================================================================
c . 22.32 |================================================================
d . 23.50 |===================================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 93.21 |====================================================================
b . 91.45 |===================================================================
c . 91.33 |===================================================================
d . 83.03 |=============================================================

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 10.73 |=============================================================
b . 10.93 |==============================================================
c . 10.95 |==============================================================
d . 12.05 |====================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 118.52 |===================================================================
b . 120.31 |====================================================================
c . 118.21 |===================================================================
d . 117.35 |==================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 8.43551 |===================================================================
b . 8.30904 |==================================================================
c . 8.45631 |===================================================================
d . 8.52036 |====================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 166.70 |===================================================================
b . 167.57 |====================================================================
c . 168.37 |====================================================================
d . 164.45 |==================================================================

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 5.99672 |===================================================================
b . 5.96555 |===================================================================
c . 5.93691 |==================================================================
d . 6.07861 |====================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 5.52731 |================================================================
b . 5.50029 |===============================================================
c . 5.89950 |====================================================================
d . 5.42253 |===============================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 180.91 |===================================================================
b . 181.80 |===================================================================
c . 169.50 |==============================================================
d . 184.51 |====================================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 8.60423 |=================================================================
b . 8.48368 |================================================================
c . 8.99805 |====================================================================
d . 7.96736 |============================================================

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 116.22 |===============================================================
b . 117.87 |================================================================
c . 111.13 |============================================================
d . 125.55 |====================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 135.85 |================================================================
b . 139.67 |==================================================================
c . 144.83 |====================================================================
d . 139.87 |==================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 7.35929 |====================================================================
b . 7.15790 |==================================================================
c . 6.90258 |================================================================
d . 7.14811 |==================================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 565.24 |===================================================================
b . 565.48 |===================================================================
c . 572.86 |====================================================================
d . 519.45 |==============================================================

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1.76819 |==============================================================
b . 1.76734 |==============================================================
c . 1.74432 |==============================================================
d . 1.92558 |====================================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.892606 |====================================================================
b . 0.868291 |==================================================================
c . 0.895635 |====================================================================
d . 0.834423 |===============================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 1120.31 |================================================================
b . 1151.68 |=================================================================
c . 1116.52 |===============================================================
d . 1198.57 |====================================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 1.16523 |=================================================================
b . 1.15557 |================================================================
c . 1.22009 |====================================================================
d . 1.06101 |===========================================================

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 858.20 |==============================================================
b . 865.31 |==============================================================
c . 819.56 |===========================================================
d . 942.94 |====================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 11.58 |===================================================================
b . 11.78 |====================================================================
c . 11.66 |===================================================================
d . 11.17 |================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 86.34 |==================================================================
b . 84.90 |================================================================
c . 85.79 |=================================================================
d . 89.51 |====================================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 22.93 |==================================================================
b . 22.99 |==================================================================
c . 23.76 |====================================================================
d . 19.71 |========================================================

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 43.61 |==========================================================
b . 43.50 |==========================================================
c . 42.08 |========================================================
d . 50.77 |====================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 74.69 |==================================================================
b . 75.90 |===================================================================
c . 77.30 |====================================================================
d . 72.54 |================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 13.39 |==================================================================
b . 13.17 |=================================================================
c . 12.93 |================================================================
d . 13.79 |====================================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 216.67 |=================================================================
b . 222.38 |===================================================================
c . 226.04 |====================================================================
d . 193.67 |==========================================================

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 4.61373 |=============================================================
b . 4.49478 |===========================================================
c . 4.42207 |==========================================================
d . 5.16479 |====================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 70.12 |====================================================================
b . 69.13 |===================================================================
c . 68.94 |===================================================================
d . 64.08 |==============================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 14.26 |==============================================================
b . 14.46 |===============================================================
c . 14.50 |===============================================================
d . 15.61 |====================================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 80.99 |====================================================================
b . 75.86 |================================================================
c . 78.95 |==================================================================
d . 71.94 |============================================================

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 12.35 |============================================================
b . 13.18 |================================================================
c . 12.66 |==============================================================
d . 13.90 |====================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 0.465907 |====================================================================
b . 0.445551 |=================================================================
c . 0.456424 |===================================================================
d . 0.420335 |=============================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 2146.34 |=============================================================
b . 2244.40 |================================================================
c . 2190.94 |===============================================================
d . 2379.56 |====================================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 0.534553 |====================================================================
b . 0.533521 |====================================================================
c . 0.526244 |===================================================================
d . 0.471409 |============================================================

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 1870.72 |============================================================
b . 1874.34 |============================================================
c . 1900.25 |=============================================================
d . 2122.37 |====================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
a . 26.23 |===================================================================
b . 26.46 |====================================================================
c . 26.18 |===================================================================
d . 25.80 |==================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
a . 38.13 |===================================================================
b . 37.80 |==================================================================
c . 38.19 |===================================================================
d . 38.76 |====================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inferences Per Second > Higher Is Better
a . 39.79 |====================================================================
b . 39.42 |===================================================================
c . 39.71 |====================================================================
d . 38.06 |=================================================================

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Inference Time Cost (ms) < Lower Is Better
a . 25.13 |=================================================================
b . 25.37 |==================================================================
c . 25.18 |=================================================================
d . 26.28 |====================================================================
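
For context on the Parallel/Standard split above: these correspond to ONNX Runtime's two execution modes, ORT_PARALLEL and ORT_SEQUENTIAL. The sketch below is a minimal illustration of that setting, assuming the onnxruntime Python API; the PTS test profile itself drives ONNX Runtime's native benchmarking harness, and the model path and input names here are placeholders, not values from the profile.

    # Minimal sketch: Parallel vs Standard (sequential) executor in onnxruntime.
    # "gpt2.onnx" and the input feed are placeholders for illustration only.
    import time
    import numpy as np
    import onnxruntime as ort

    def inferences_per_second(mode, model_path, feed, iters=100):
        opts = ort.SessionOptions()
        opts.execution_mode = mode  # ORT_PARALLEL ("Parallel") vs ORT_SEQUENTIAL ("Standard")
        sess = ort.InferenceSession(model_path, opts, providers=["CPUExecutionProvider"])
        start = time.perf_counter()
        for _ in range(iters):
            sess.run(None, feed)  # run the full graph each iteration
        return iters / (time.perf_counter() - start)

    # Placeholder GPT-2-style input; real input names depend on the exported model.
    feed = {"input_ids": np.ones((1, 128), dtype=np.int64)}
    for mode in (ort.ExecutionMode.ORT_PARALLEL, ort.ExecutionMode.ORT_SEQUENTIAL):
        print(mode, inferences_per_second(mode, "gpt2.onnx", feed))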

simdjson 3.10
Throughput Test: Kostya
GB/s > Higher Is Better
a . 4.51 |====================================================================
b . 4.42 |===================================================================
c . 4.41 |==================================================================
d . 4.46 |===================================================================

simdjson 3.10
Throughput Test: TopTweet
GB/s > Higher Is Better
a . 6.88 |===================================================================
b . 6.91 |===================================================================
c . 6.94 |===================================================================
d . 7.02 |====================================================================

simdjson 3.10
Throughput Test: LargeRandom
GB/s > Higher Is Better
a . 1.25 |====================================================================
b . 1.25 |====================================================================
c . 1.25 |====================================================================
d . 1.24 |===================================================================

simdjson 3.10
Throughput Test: PartialTweets
GB/s > Higher Is Better
a . 8.74 |====================================================================
b . 6.74 |====================================================
c . 6.59 |===================================================
d . 6.83 |=====================================================

simdjson 3.10
Throughput Test: DistinctUserID
GB/s > Higher Is Better
a . 7.09 |====================================================================
b . 6.91 |==================================================================
c . 6.90 |==================================================================
d . 7.04 |====================================================================
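
The simdjson figures are bytes of JSON parsed per second of wall time, scaled to GB/s. A rough sketch of that measurement shape, assuming the pysimdjson binding (imported as simdjson); the test profile itself uses simdjson's bundled C++ benchmark, which avoids the binding overhead, and twitter.json is just an illustrative corpus name:

    # Hedged sketch of a GB/s throughput measurement via pysimdjson.
    import time
    import simdjson  # pip install pysimdjson

    def throughput_gbps(path, iters=50):
        data = open(path, "rb").read()
        parser = simdjson.Parser()
        start = time.perf_counter()
        for _ in range(iters):
            parser.parse(data)  # re-parse the whole document each iteration
        elapsed = time.perf_counter() - start
        return len(data) * iters / elapsed / 1e9  # total bytes -> GB per second

    print(f"{throughput_gbps('twitter.json'):.2f} GB/s")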

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 3.728 |===================================================================
b . 3.491 |===============================================================
c . 3.796 |====================================================================
d . 3.410 |=============================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 14.78 |====================================================================
b . 13.31 |=============================================================
c . 14.88 |====================================================================
d . 12.94 |===========================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 30.83 |==================================================================
b . 29.75 |================================================================
c . 31.64 |====================================================================
d . 29.26 |===============================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
a . 113.83 |===================================================================
b . 109.45 |================================================================
c . 116.14 |====================================================================
d . 112.33 |==================================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 11.91 |===================================================================
b . 11.44 |================================================================
c . 12.09 |====================================================================
d . 11.56 |=================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 43.91 |===================================================================
b . 41.95 |================================================================
c . 44.66 |====================================================================
d . 42.09 |================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 106.46 |===================================================================
b . 100.45 |================================================================
c . 107.48 |====================================================================
d . 101.17 |================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Bosphorus 1080p
Frames Per Second > Higher Is Better
a . 474.77 |====================================================================
b . 455.79 |=================================================================
c . 476.22 |====================================================================
d . 460.25 |==================================================================

SVT-AV1 2.2
Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 0.568 |===================================================================
b . 0.565 |===================================================================
c . 0.573 |====================================================================
d . 0.550 |=================================================================

SVT-AV1 2.2
Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 2.573 |===================================================================
b . 2.557 |==================================================================
c . 2.630 |====================================================================
d . 2.481 |================================================================

SVT-AV1 2.2
Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 3.694 |====================================================================
b . 3.606 |==================================================================
c . 3.708 |====================================================================
d . 3.523 |=================================================================

SVT-AV1 2.2
Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit
Frames Per Second > Higher Is Better
a . 6.164 |====================================================================
b . 6.158 |====================================================================
c . 6.135 |====================================================================
d . 6.143 |====================================================================
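
The SVT-AV1 numbers come from encoding the listed clips at each preset. A minimal sketch of the kind of invocation involved, assuming an SvtAv1EncApp binary on PATH and a local Y4M copy of the source clip; the clip file name and frame count are placeholders rather than the exact PTS profile command line, and overall wall-clock FPS is computed here instead of parsed from the encoder's own summary:

    # Hedged sketch: time an SVT-AV1 encode and report overall frames per second.
    import subprocess
    import time

    def encode_fps(clip, preset, frames):
        start = time.perf_counter()
        subprocess.run(
            ["SvtAv1EncApp", "--preset", str(preset), "-i", clip, "-b", "out.ivf"],
            check=True, capture_output=True,
        )
        return frames / (time.perf_counter() - start)

    # Placeholder clip name and frame count for illustration.
    print(encode_fps("Bosphorus_3840x2160.y4m", 8, 600))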

Whisperfile 20Aug24
Model Size: Tiny
Seconds < Lower Is Better
a . 52.71 |==================================================================
b . 52.63 |=================================================================
c . 52.50 |=================================================================
d . 54.71 |====================================================================

Whisperfile 20Aug24
Model Size: Small
Seconds < Lower Is Better
a . 259.91 |==================================================================
b . 261.74 |==================================================================
c . 257.38 |=================================================================
d . 269.18 |====================================================================

Whisperfile 20Aug24
Model Size: Medium
Seconds < Lower Is Better
a . 754.78 |====================================================================
b . 751.48 |====================================================================
c . 722.54 |=================================================================
d . 752.93 |====================================================================
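
Whisperfile is a self-contained whisper.cpp build in the llamafile family, with the model weights embedded per binary, and the test reports wall-clock seconds to transcribe a sample recording. The sketch below times one run via subprocess; the executable name, audio file, and the whisper.cpp-style -f flag are all assumptions for illustration, not the PTS profile's literal command line:

    # Hedged sketch: time a whisperfile transcription end to end.
    import subprocess
    import time

    def transcribe_seconds(whisperfile, audio):
        start = time.perf_counter()
        # -f follows whisper.cpp conventions (assumption); output is discarded.
        subprocess.run([whisperfile, "-f", audio], check=True, capture_output=True)
        return time.perf_counter() - start

    print(transcribe_seconds("./whisper-tiny.en.llamafile", "sample.wav"))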