auf11: AMD Ryzen AI 9 HX 370 testing with an ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS) and AMD Radeon 512MB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408115-NE-AUF11752014&grs&sor .
auf11 System Details (identical configuration for runs a, b, c, and d):

  Processor: AMD Ryzen AI 9 HX 370 @ 4.37GHz (12 Cores / 24 Threads)
  Motherboard: ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 (UM5606WA.308 BIOS)
  Chipset: AMD Device 1507
  Memory: 4 x 8GB LPDDR5-7500MT/s Samsung K3KL9L90CM-MGCT
  Disk: 1024GB MTFDKBA1T0QFM-1BD1AABGB
  Graphics: AMD Radeon 512MB
  Audio: AMD Rembrandt Radeon HD Audio
  Network: MEDIATEK Device 7925
  OS: Ubuntu 24.04
  Kernel: 6.10.0-phx (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server 1.21.1.11 + Wayland
  OpenGL: 4.6 Mesa 24.3~git2407230600.74b4c9~oibaf~n (git-74b4c91 2024-07-23 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 2880x1800

Kernel Details: amdgpu.dcdebugmask=0x600 - Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - Platform Profile: balanced - CPU Microcode: 0xb204011 - ACPI Profile: balanced

Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
auf11 Result Overview (runs a-d; GB/s and Nodes Per Second: higher is better; us, ms, and Seconds: lower is better):

  Test (unit)                            a         b         c         d
  simdjson: Kostya (GB/s)                4.49      4.5       5.9       4.45
  simdjson: TopTweet (GB/s)              7.08      9         8.99      6.93
  LeelaChessZero: BLAS (Nodes/s)         55        61        67        52
  XNNPACK: FP32MobileNetV3Small (us)     995       1145      1169      1180
  XNNPACK: FP32MobileNetV2 (us)          1961      2238      2124      2270
  XNNPACK: FP32MobileNetV3Large (us)     2255      2530      2388      2585
  MNN: nasnet (ms)                       14.455    16.304    15.493    16.452
  XNNPACK: FP16MobileNetV2 (us)          2178      2383      2423      2377
  XNNPACK: QU8MobileNetV3Large (us)      2001      1819      1802      1907
  LeelaChessZero: Eigen (Nodes/s)        43        39        41        43
  simdjson: DistinctUserID (GB/s)        7.76      7.06      7.17      7.18
  simdjson: PartialTweets (GB/s)         6.83      6.39      6.36      6.76
  XNNPACK: QU8MobileNetV2 (us)           1745      1651      1626      1674
  MNN: inception-v3 (ms)                 30.431    32.066    32.233    31.042
  XNNPACK: FP16MobileNetV3Small (us)     1252      1201      1184      1205
  MNN: MobileNetV2_224 (ms)              3.579     3.692     3.773     3.746
  XNNPACK: QU8MobileNetV3Small (us)      1055      1017      1017      1059
  MNN: squeezenetv1.1 (ms)               2.99      2.919     3.025     2.936
  MNN: resnet-v2-50 (ms)                 16.022    15.654    16.183    16.021
  XNNPACK: FP16MobileNetV3Large (us)     2535      2485      2554      2481
  Y-Cruncher: 1B (Seconds)               27.638    28.244    28.166    28.249
  MNN: SqueezeNetV1.0 (ms)               4.62      4.575     4.563     4.662
  Y-Cruncher: 500M (Seconds)             12.488    12.738    12.694    12.517
  MNN: mobilenet-v1-1.0 (ms)             3.606     3.666     3.672     3.651
  MNN: mobilenetV3 (ms)                  1.936     1.945     1.946     1.917
  simdjson: LargeRandom (GB/s)           1.24      1.24      1.25      1.25
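The overview above can be condensed into a single relative score per run, similar in spirit to OpenBenchmarking.org's geometric-mean result summary (the "&grs" flag in the export URL). A minimal sketch, using a hand-picked subset of the tests above; the subset choice and the normalize-to-best scheme are illustrative assumptions, not necessarily how OpenBenchmarking computes its summary:

```python
# Sketch: one relative score per run via a geometric mean of
# best-normalized results. Values are copied from the overview table;
# hib=True means higher is better for that test.
from math import prod

results = [
    ("simdjson Kostya (GB/s)",          True,  dict(a=4.49,   b=4.5,    c=5.9,    d=4.45)),
    ("simdjson TopTweet (GB/s)",        True,  dict(a=7.08,   b=9.0,    c=8.99,   d=6.93)),
    ("LeelaChessZero BLAS (Nodes/s)",   True,  dict(a=55,     b=61,     c=67,     d=52)),
    ("XNNPACK FP32MobileNetV3Small",    False, dict(a=995,    b=1145,   c=1169,   d=1180)),
    ("MNN nasnet (ms)",                 False, dict(a=14.455, b=16.304, c=15.493, d=16.452)),
    ("Y-Cruncher 1B (s)",               False, dict(a=27.638, b=28.244, c=28.166, d=28.249)),
]

def relative_scores(results):
    """Normalize each test so the best run scores 1.0, then take the
    geometric mean of each run's normalized scores across all tests."""
    runs = ("a", "b", "c", "d")
    norm = {r: [] for r in runs}
    for _, hib, vals in results:
        best = max(vals.values()) if hib else min(vals.values())
        for r in runs:
            # For lower-is-better tests, invert so 1.0 is still "best".
            norm[r].append(vals[r] / best if hib else best / vals[r])
    return {r: prod(v) ** (1 / len(v)) for r, v in norm.items()}

scores = relative_scores(results)
for run, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"run {run}: {s:.3f}")
```

Normalizing each test against the best run before averaging keeps microseconds, milliseconds, and GB/s on a comparable scale, and the geometric mean prevents any single slow test from dominating the summary.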
simdjson 3.10 - Throughput Test: Kostya (GB/s, more is better)
  c: 5.90, b: 4.50, a: 4.49, d: 4.45
  1. (CXX) g++ options: -O3 -lrt
simdjson 3.10 - Throughput Test: TopTweet (GB/s, more is better)
  b: 9.00, c: 8.99, a: 7.08, d: 6.93
LeelaChessZero 0.31.1 - Backend: BLAS (Nodes Per Second, more is better)
  c: 67, b: 61, a: 55, d: 52
  1. (CXX) g++ options: -flto -pthread
XNNPACK 2cd86b - Model: FP32MobileNetV3Small (us, fewer is better)
  a: 995, b: 1145, c: 1169, d: 1180
  1. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b - Model: FP32MobileNetV2 (us, fewer is better)
  a: 1961, c: 2124, b: 2238, d: 2270
XNNPACK 2cd86b - Model: FP32MobileNetV3Large (us, fewer is better)
  a: 2255, c: 2388, b: 2530, d: 2585
Mobile Neural Network 2.9.b11b7037d - Model: nasnet (ms, fewer is better)
  a: 14.46 (min 13.96 / max 46.22), c: 15.49 (min 14.46 / max 38.16), b: 16.30 (min 15.12 / max 38.62), d: 16.45 (min 14.97 / max 39.96)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b - Model: FP16MobileNetV2 (us, fewer is better)
  a: 2178, d: 2377, b: 2383, c: 2423
XNNPACK 2cd86b - Model: QU8MobileNetV3Large (us, fewer is better)
  c: 1802, b: 1819, d: 1907, a: 2001
LeelaChessZero 0.31.1 - Backend: Eigen (Nodes Per Second, more is better)
  d: 43, a: 43, c: 41, b: 39
simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, more is better)
  a: 7.76, d: 7.18, c: 7.17, b: 7.06
simdjson 3.10 - Throughput Test: PartialTweets (GB/s, more is better)
  a: 6.83, d: 6.76, b: 6.39, c: 6.36
XNNPACK 2cd86b - Model: QU8MobileNetV2 (us, fewer is better)
  c: 1626, b: 1651, d: 1674, a: 1745
Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 (ms, fewer is better)
  a: 30.43 (min 28.63 / max 62.24), d: 31.04 (min 27.39 / max 59.74), b: 32.07 (min 29.4 / max 63.3), c: 32.23 (min 29.38 / max 67.1)
XNNPACK 2cd86b - Model: FP16MobileNetV3Small (us, fewer is better)
  c: 1184, b: 1201, d: 1205, a: 1252
Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 (ms, fewer is better)
  a: 3.579 (min 3.47 / max 29.51), b: 3.692 (min 3.49 / max 30.84), d: 3.746 (min 3.59 / max 6.32), c: 3.773 (min 3.44 / max 22.94)
XNNPACK 2cd86b - Model: QU8MobileNetV3Small (us, fewer is better)
  b: 1017, c: 1017, a: 1055, d: 1059
Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 (ms, fewer is better)
  b: 2.919 (min 2.88 / max 5.07), d: 2.936 (min 2.88 / max 4.96), a: 2.990 (min 2.84 / max 29.21), c: 3.025 (min 2.88 / max 5.62)
Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, fewer is better)
  b: 15.65 (min 15.06 / max 33.59), d: 16.02 (min 15.25 / max 41.56), a: 16.02 (min 15.02 / max 47.23), c: 16.18 (min 15.15 / max 49.82)
XNNPACK 2cd86b - Model: FP16MobileNetV3Large (us, fewer is better)
  d: 2481, b: 2485, a: 2535, c: 2554
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, fewer is better)
  a: 27.64, c: 28.17, b: 28.24, d: 28.25
Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 (ms, fewer is better)
  c: 4.563 (min 4.38 / max 22.95), b: 4.575 (min 4.36 / max 21.58), a: 4.620 (min 4.42 / max 35.65), d: 4.662 (min 4.35 / max 7.56)
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, fewer is better)
  a: 12.49, d: 12.52, c: 12.69, b: 12.74
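A quick sanity check on the y-cruncher timings: going from 500M to 1B digits roughly doubles the runtime (about 2.2x for run a). A hedged comparison against a naive n*log(n) cost model; the model is an illustrative assumption, since y-cruncher's real scaling depends on its large-multiplication algorithms and memory behavior:

```python
# Compare the measured 1B/500M runtime ratio (run "a", from the charts
# above) against a naive n*log2(n) scaling model. The model is an
# assumption for illustration, not something y-cruncher documents.
from math import log2

t_500m, t_1b = 12.488, 27.638                      # seconds, run "a"
measured = t_1b / t_500m                           # measured scaling ratio
modeled = (1e9 * log2(1e9)) / (5e8 * log2(5e8))    # n*log2(n) prediction
print(f"measured {measured:.2f}x vs n*log(n) model {modeled:.2f}x")
```

The measured ratio comes out slightly above the model's roughly 2.07x prediction, which is plausible given cache and memory-bandwidth pressure at the larger size.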
Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 (ms, fewer is better)
  a: 3.606 (min 3.32 / max 8.39), d: 3.651 (min 3.39 / max 9.25), b: 3.666 (min 3.42 / max 9.6), c: 3.672 (min 3.39 / max 25.1)
Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, fewer is better)
  d: 1.917 (min 1.76 / max 5.4), a: 1.936 (min 1.71 / max 24.23), b: 1.945 (min 1.73 / max 28.5), c: 1.946 (min 1.78 / max 13.45)
simdjson 3.10 - Throughput Test: LargeRandom (GB/s, more is better)
  d: 1.25, c: 1.25, b: 1.24, a: 1.24
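The simdjson figures above are parse throughput: bytes of JSON consumed per second. As a rough illustration of how such a number is derived, here is a stand-in measurement using Python's built-in json module (not simdjson itself, and a synthetic payload rather than the benchmark's datasets, so the resulting GB/s is orders of magnitude below simdjson's SIMD parser):

```python
# Stand-in throughput measurement: parse a JSON payload once and report
# bytes per second, the same metric the simdjson charts use.
import json
import time

# Synthetic payload (assumption; the real benchmark uses fixed datasets
# such as twitter.json).
doc = json.dumps([{"id": i, "text": "tweet " * 8} for i in range(10_000)])
payload = doc.encode()

start = time.perf_counter()
json.loads(payload)
elapsed = time.perf_counter() - start

gbps = len(payload) / elapsed / 1e9
print(f"parsed {len(payload)} bytes at {gbps:.3f} GB/s")
```

A single-shot timing like this is noisy; the actual benchmark averages many iterations, which is why the charts report stable figures per run.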
Phoronix Test Suite v10.8.5