auf11: AMD Ryzen AI 9 HX 370 testing with an ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS) and AMD Radeon 512MB graphics on Ubuntu 24.04, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408115-NE-AUF11752014&grs&sro.
auf11 - system configuration (shared by all four runs a, b, c, d):

Processor: AMD Ryzen AI 9 HX 370 @ 4.37GHz (12 Cores / 24 Threads)
Motherboard: ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS)
Chipset: AMD Device 1507
Memory: 4 x 8GB LPDDR5-7500MT/s Samsung K3KL9L90CM-MGCT
Disk: 1024GB MTFDKBA1T0QFM-1BD1AABGB
Graphics: AMD Radeon 512MB
Audio: AMD Rembrandt Radeon HD Audio
Network: MEDIATEK Device 7925
OS: Ubuntu 24.04
Kernel: 6.10.0-phx (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server 1.21.1.11 + Wayland
OpenGL: 4.6 Mesa 24.3~git2407230600.74b4c9~oibaf~n (git-74b4c91 2024-07-23 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 2880x1800

Kernel Details: amdgpu.dcdebugmask=0x600; Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); Platform Profile: balanced; CPU Microcode: 0xb204011; ACPI Profile: balanced

Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
auf11 - results summary (runs a-d):

Test                            Unit               a        b        c        d
simdjson: Kostya                GB/s (more)        4.49     4.50     5.90     4.45
simdjson: TopTweet              GB/s (more)        7.08     9.00     8.99     6.93
lczero: BLAS                    Nodes/s (more)     55       61       67       52
xnnpack: FP32MobileNetV3Small   us (fewer)         995      1145     1169     1180
xnnpack: FP32MobileNetV2        us (fewer)         1961     2238     2124     2270
xnnpack: FP32MobileNetV3Large   us (fewer)         2255     2530     2388     2585
mnn: nasnet                     ms (fewer)         14.455   16.304   15.493   16.452
xnnpack: FP16MobileNetV2        us (fewer)         2178     2383     2423     2377
xnnpack: QU8MobileNetV3Large    us (fewer)         2001     1819     1802     1907
lczero: Eigen                   Nodes/s (more)     43       39       41       43
simdjson: DistinctUserID        GB/s (more)        7.76     7.06     7.17     7.18
simdjson: PartialTweets         GB/s (more)        6.83     6.39     6.36     6.76
xnnpack: QU8MobileNetV2         us (fewer)         1745     1651     1626     1674
mnn: inception-v3               ms (fewer)         30.431   32.066   32.233   31.042
xnnpack: FP16MobileNetV3Small   us (fewer)         1252     1201     1184     1205
mnn: MobileNetV2_224            ms (fewer)         3.579    3.692    3.773    3.746
xnnpack: QU8MobileNetV3Small    us (fewer)         1055     1017     1017     1059
mnn: squeezenetv1.1             ms (fewer)         2.990    2.919    3.025    2.936
mnn: resnet-v2-50               ms (fewer)         16.022   15.654   16.183   16.021
xnnpack: FP16MobileNetV3Large   us (fewer)         2535     2485     2554     2481
y-cruncher: 1B                  Seconds (fewer)    27.638   28.244   28.166   28.249
mnn: SqueezeNetV1.0             ms (fewer)         4.620    4.575    4.563    4.662
y-cruncher: 500M                Seconds (fewer)    12.488   12.738   12.694   12.517
mnn: mobilenet-v1-1.0           ms (fewer)         3.606    3.666    3.672    3.651
mnn: mobilenetV3                ms (fewer)         1.936    1.945    1.946    1.917
simdjson: LargeRandom           GB/s (more)        1.24     1.24     1.25     1.25
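The run-to-run spread in the summary above can be quantified with a short script. A minimal sketch, assuming Python with only the standard library; the values are copied from the a-d columns of the simdjson Kostya and y-cruncher 1B rows:

```python
# Sketch: summarize run-to-run variation for two rows of the result table.
# Values are taken from the a..d columns above; nothing here is generated.
from statistics import mean, pstdev

runs = {
    "simdjson Kostya (GB/s)": [4.49, 4.50, 5.90, 4.45],
    "y-cruncher 1B (s)": [27.638, 28.244, 28.166, 28.249],
}

def cv_percent(values):
    """Coefficient of variation: population std dev as a percentage of the mean."""
    return 100.0 * pstdev(values) / mean(values)

for name, vals in runs.items():
    print(f"{name}: mean={mean(vals):.3f}, CV={cv_percent(vals):.1f}%")
```

This makes the character of the data easy to see: the y-cruncher times are tightly clustered (CV under 1%), while the simdjson Kostya row is dominated by run c's outlier.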
simdjson 3.10 - Throughput Test: Kostya (GB/s, more is better)
  a: 4.49   b: 4.50   c: 5.90   d: 4.45
  1. (CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: TopTweet (GB/s, more is better)
  a: 7.08   b: 9.00   c: 8.99   d: 6.93
  1. (CXX) g++ options: -O3 -lrt
LeelaChessZero 0.31.1 - Backend: BLAS (Nodes Per Second, more is better)
  a: 55   b: 61   c: 67   d: 52
  1. (CXX) g++ options: -flto -pthread
XNNPACK 2cd86b - Model: FP32MobileNetV3Small (us, fewer is better)
  a: 995   b: 1145   c: 1169   d: 1180
  1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: FP32MobileNetV2 (us, fewer is better)
  a: 1961   b: 2238   c: 2124   d: 2270
  1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: FP32MobileNetV3Large (us, fewer is better)
  a: 2255   b: 2530   c: 2388   d: 2585
  1. (CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: nasnet (ms, fewer is better)
  a: 14.46 (min: 13.96 / max: 46.22)
  b: 16.30 (min: 15.12 / max: 38.62)
  c: 15.49 (min: 14.46 / max: 38.16)
  d: 16.45 (min: 14.97 / max: 39.96)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV2 (us, fewer is better)
  a: 2178   b: 2383   c: 2423   d: 2377
  1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: QU8MobileNetV3Large (us, fewer is better)
  a: 2001   b: 1819   c: 1802   d: 1907
  1. (CXX) g++ options: -O3 -lrt -lm
LeelaChessZero 0.31.1 - Backend: Eigen (Nodes Per Second, more is better)
  a: 43   b: 39   c: 41   d: 43
  1. (CXX) g++ options: -flto -pthread
simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, more is better)
  a: 7.76   b: 7.06   c: 7.17   d: 7.18
  1. (CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: PartialTweets (GB/s, more is better)
  a: 6.83   b: 6.39   c: 6.36   d: 6.76
  1. (CXX) g++ options: -O3 -lrt
XNNPACK 2cd86b - Model: QU8MobileNetV2 (us, fewer is better)
  a: 1745   b: 1651   c: 1626   d: 1674
  1. (CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 (ms, fewer is better)
  a: 30.43 (min: 28.63 / max: 62.24)
  b: 32.07 (min: 29.4 / max: 63.3)
  c: 32.23 (min: 29.38 / max: 67.1)
  d: 31.04 (min: 27.39 / max: 59.74)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV3Small (us, fewer is better)
  a: 1252   b: 1201   c: 1184   d: 1205
  1. (CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 (ms, fewer is better)
  a: 3.579 (min: 3.47 / max: 29.51)
  b: 3.692 (min: 3.49 / max: 30.84)
  c: 3.773 (min: 3.44 / max: 22.94)
  d: 3.746 (min: 3.59 / max: 6.32)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: QU8MobileNetV3Small (us, fewer is better)
  a: 1055   b: 1017   c: 1017   d: 1059
  1. (CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 (ms, fewer is better)
  a: 2.990 (min: 2.84 / max: 29.21)
  b: 2.919 (min: 2.88 / max: 5.07)
  c: 3.025 (min: 2.88 / max: 5.62)
  d: 2.936 (min: 2.88 / max: 4.96)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, fewer is better)
  a: 16.02 (min: 15.02 / max: 47.23)
  b: 15.65 (min: 15.06 / max: 33.59)
  c: 16.18 (min: 15.15 / max: 49.82)
  d: 16.02 (min: 15.25 / max: 41.56)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV3Large (us, fewer is better)
  a: 2535   b: 2485   c: 2554   d: 2481
  1. (CXX) g++ options: -O3 -lrt -lm
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, fewer is better)
  a: 27.64   b: 28.24   c: 28.17   d: 28.25
Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 (ms, fewer is better)
  a: 4.620 (min: 4.42 / max: 35.65)
  b: 4.575 (min: 4.36 / max: 21.58)
  c: 4.563 (min: 4.38 / max: 22.95)
  d: 4.662 (min: 4.35 / max: 7.56)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, fewer is better)
  a: 12.49   b: 12.74   c: 12.69   d: 12.52
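As a rough worked check on the two y-cruncher results (a sketch, not part of the original result file): dividing each run's 1B time by its 500M time shows how the workload scales when the digit count doubles. The times below are copied from the tables above:

```python
# Sketch: per-run scaling ratio between the y-cruncher 1B and 500M results.
# Times (seconds) copied from the result tables; run order is a, b, c, d.
t_1b   = [27.638, 28.244, 28.166, 28.249]
t_500m = [12.488, 12.738, 12.694, 12.517]

ratios = [b / m for b, m in zip(t_1b, t_500m)]
print([round(r, 2) for r in ratios])
```

All four ratios land a bit above 2.0, which is consistent with the cost of large Pi computations growing slightly faster than linearly in the digit count.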
Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 (ms, fewer is better)
  a: 3.606 (min: 3.32 / max: 8.39)
  b: 3.666 (min: 3.42 / max: 9.6)
  c: 3.672 (min: 3.39 / max: 25.1)
  d: 3.651 (min: 3.39 / max: 9.25)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, fewer is better)
  a: 1.936 (min: 1.71 / max: 24.23)
  b: 1.945 (min: 1.73 / max: 28.5)
  c: 1.946 (min: 1.78 / max: 13.45)
  d: 1.917 (min: 1.76 / max: 5.4)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
simdjson 3.10 - Throughput Test: LargeRandom (GB/s, more is better)
  a: 1.24   b: 1.24   c: 1.25   d: 1.25
  1. (CXX) g++ options: -O3 -lrt
Phoronix Test Suite v10.8.5