new stuff: Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1662 BIOS) and XFX AMD Radeon RX 7900 XTX 24GB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408118-PTS-NEWSTUFF04&grs&sor.
System Details (runs a, b, c, and d all used the same configuration):

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1662 BIOS)
Chipset: Intel Raptor Lake-S PCH
Memory: 2 x 16GB DDR5-6000MT/s Corsair CMK32GX5M2B6000C36
Disk: Western Digital WD_BLACK SN850X 2000GB
Graphics: XFX AMD Radeon RX 7900 XTX 24GB
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 24.04
Kernel: 6.10.0-061000rc6daily20240706-generic (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server 1.21.1.11 + Wayland
OpenGL: 4.6 Mesa 24.2~git2407080600.801ed4~oibaf~n (git-801ed4d 2024-07-08 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x129; Thermald 2.5.6

Security Details:
  gather_data_sampling: Not affected
  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  mmio_stale_data: Not affected
  reg_file_data_sampling: Mitigation of Clear Register File
  retbleed: Not affected
  spec_rstack_overflow: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: SW sequence; BHI: BHI_DIS_S
  srbds: Not affected
  tsx_async_abort: Not affected
Result Overview (runs a, b, c, d):

Test | a | b | c | d
xnnpack: QU8MobileNetV3Small (us) | 719 | 740 | 717 | 1192
xnnpack: FP16MobileNetV3Small (us) | 781 | 976 | 790 | 765
xnnpack: FP32MobileNetV2 (us) | 1122 | 1107 | 1183 | 1411
simdjson: TopTweet (GB/s) | 7.10 | 8.51 | 8.52 | 8.52
xnnpack: FP32MobileNetV3Large (us) | 1599 | 1492 | 1500 | 1648
mnn: mobilenetV3 (ms) | 0.888 | 0.887 | 0.969 | 0.901
mnn: squeezenetv1.1 (ms) | 1.756 | 1.804 | 1.729 | 1.657
lczero: Eigen (Nodes/s) | 182 | 175 | 173 | 183
lczero: BLAS (Nodes/s) | 201 | 201 | 206 | 195
mnn: nasnet (ms) | 6.873 | 6.656 | 6.695 | 6.574
xnnpack: QU8MobileNetV2 (us) | 949 | 974 | 944 | 932
y-cruncher: 500M (s) | 9.822 | 9.777 | 9.474 | 9.847
mnn: SqueezeNetV1.0 (ms) | 2.742 | 2.808 | 2.835 | 2.839
xnnpack: FP32MobileNetV3Small (us) | 698 | 676 | 689 | 686
xnnpack: QU8MobileNetV3Large (us) | 1167 | 1195 | 1160 | 1164
mnn: resnet-v2-50 (ms) | 12.647 | 12.48 | 12.782 | 12.434
mnn: mobilenet-v1-1.0 (ms) | 2.038 | 1.99 | 1.985 | 2.026
y-cruncher: 1B (s) | 21.576 | 21.368 | 21.048 | 21.284
xnnpack: FP16MobileNetV2 (us) | 1309 | 1285 | 1311 | 1282
mnn: inception-v3 (ms) | 15.983 | 16.011 | 15.858 | 15.821
xnnpack: FP16MobileNetV3Large (us) | 1463 | 1462 | 1447 | 1446
simdjson: LargeRand (GB/s) | 1.99 | 1.99 | 1.99 | 1.98
simdjson: PartialTweets (GB/s) | 8.18 | 8.18 | 8.19 | 8.18
simdjson: Kostya (GB/s) | 5.36 | 5.36 | 5.36 | 5.36
mnn: MobileNetV2_224 (ms) | 1.895 | 1.934 | 1.936 | 1.831
simdjson: DistinctUserID (GB/s) | 8.37 | 7.22 | 8.67 | 7.21
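Raw values like those in the overview table can be easier to compare when normalized against the fastest run. The following sketch is an editor addition rather than part of the exported result; it uses the QU8MobileNetV3Small row above, where lower is better:

```python
# Normalize one test's timings against the fastest run.
# Values (microseconds) are the QU8MobileNetV3Small row above; lower is better.
results = {"a": 719, "b": 740, "c": 717, "d": 1192}

fastest = min(results.values())  # 717 us, run c
for run, us in sorted(results.items(), key=lambda kv: kv[1]):
    # A ratio above 1.0 means the run is that many times slower than the fastest.
    print(f"{run}: {us} us ({us / fastest:.3f}x)")
```

On this row, run d stands out at roughly 1.66x the latency of the fastest run.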
XNNPACK 2cd86b, Model: QU8MobileNetV3Small (us, fewer is better):
c: 717, a: 719, b: 740, d: 1192 (SE +/- 4.84, N = 3)
(CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b, Model: FP16MobileNetV3Small (us, fewer is better):
d: 765, a: 781, c: 790, b: 976 (SE +/- 6.39, N = 3)
(CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b, Model: FP32MobileNetV2 (us, fewer is better):
b: 1107, a: 1122, c: 1183, d: 1411 (SE +/- 11.92, N = 3)
(CXX) g++ options: -O3 -lrt -lm
simdjson 3.10, Throughput Test: TopTweet (GB/s, more is better):
d: 8.52, c: 8.52, b: 8.51, a: 7.10 (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -lrt
XNNPACK 2cd86b, Model: FP32MobileNetV3Large (us, fewer is better):
b: 1492, c: 1500, a: 1599, d: 1648 (SE +/- 32.04, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d, Model: mobilenetV3 (ms, fewer is better):
b: 0.887 (MIN: 0.86 / MAX: 1.16), a: 0.888 (MIN: 0.86 / MAX: 1.26), d: 0.901 (MIN: 0.88 / MAX: 1.25), c: 0.969 (MIN: 0.87 / MAX: 1.99); SE +/- 0.001, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Mobile Neural Network 2.9.b11b7037d, Model: squeezenetv1.1 (ms, fewer is better):
d: 1.657 (MIN: 1.55 / MAX: 3.68), c: 1.729 (MIN: 1.61 / MAX: 2.89), a: 1.756 (MIN: 1.56 / MAX: 3.64), b: 1.804 (MIN: 1.61 / MAX: 3.68); SE +/- 0.044, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
LeelaChessZero 0.31.1, Backend: Eigen (Nodes Per Second, more is better):
d: 183, a: 182, b: 175, c: 173 (SE +/- 1.67, N = 3; SE +/- 1.20, N = 3)
(CXX) g++ options: -flto -pthread
LeelaChessZero 0.31.1, Backend: BLAS (Nodes Per Second, more is better):
c: 206, b: 201, a: 201, d: 195 (SE +/- 1.00, N = 3; SE +/- 2.65, N = 3)
(CXX) g++ options: -flto -pthread
Mobile Neural Network 2.9.b11b7037d, Model: nasnet (ms, fewer is better):
d: 6.574 (MIN: 6.25 / MAX: 12.43), b: 6.656 (MIN: 6.26 / MAX: 13.26), c: 6.695 (MIN: 6.3 / MAX: 13.41), a: 6.873 (MIN: 6.27 / MAX: 17.19); SE +/- 0.086, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b, Model: QU8MobileNetV2 (us, fewer is better):
d: 932, c: 944, a: 949, b: 974 (SE +/- 7.17, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Y-Cruncher 0.8.5, Pi Digits To Calculate: 500M (Seconds, fewer is better):
c: 9.474, b: 9.777, a: 9.822, d: 9.847 (SE +/- 0.131, N = 3)
Mobile Neural Network 2.9.b11b7037d, Model: SqueezeNetV1.0 (ms, fewer is better):
a: 2.742 (MIN: 2.51 / MAX: 6.32), b: 2.808 (MIN: 2.51 / MAX: 5.35), c: 2.835 (MIN: 2.61 / MAX: 4.48), d: 2.839 (MIN: 2.53 / MAX: 4.59); SE +/- 0.027, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b, Model: FP32MobileNetV3Small (us, fewer is better):
b: 676, d: 686, c: 689, a: 698 (SE +/- 0.67, N = 3)
(CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b, Model: QU8MobileNetV3Large (us, fewer is better):
c: 1160, d: 1164, a: 1167, b: 1195 (SE +/- 9.33, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d, Model: resnet-v2-50 (ms, fewer is better):
d: 12.43 (MIN: 11.91 / MAX: 23.8), b: 12.48 (MIN: 11.78 / MAX: 24.73), a: 12.65 (MIN: 11.77 / MAX: 24.63), c: 12.78 (MIN: 12.04 / MAX: 24.51); SE +/- 0.04, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Mobile Neural Network 2.9.b11b7037d, Model: mobilenet-v1-1.0 (ms, fewer is better):
c: 1.985 (MIN: 1.79 / MAX: 2.54), b: 1.990 (MIN: 1.79 / MAX: 2.23), d: 2.026 (MIN: 1.79 / MAX: 2.99), a: 2.038 (MIN: 1.79 / MAX: 5.04); SE +/- 0.030, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Y-Cruncher 0.8.5, Pi Digits To Calculate: 1B (Seconds, fewer is better):
c: 21.05, d: 21.28, b: 21.37, a: 21.58 (SE +/- 0.25, N = 3)
XNNPACK 2cd86b, Model: FP16MobileNetV2 (us, fewer is better):
d: 1282, b: 1285, a: 1309, c: 1311 (SE +/- 11.05, N = 3)
(CXX) g++ options: -O3 -lrt -lm
Mobile Neural Network 2.9.b11b7037d, Model: inception-v3 (ms, fewer is better):
d: 15.82 (MIN: 14.77 / MAX: 31.16), c: 15.86 (MIN: 14.73 / MAX: 30.32), a: 15.98 (MIN: 14.76 / MAX: 31.12), b: 16.01 (MIN: 14.85 / MAX: 31.93); SE +/- 0.05, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b, Model: FP16MobileNetV3Large (us, fewer is better):
d: 1446, c: 1447, b: 1462, a: 1463 (SE +/- 9.49, N = 3)
(CXX) g++ options: -O3 -lrt -lm
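The XNNPACK results are reported as microseconds per inference, so fewer is better; converting them to inferences per second can make the comparison more intuitive. A minimal sketch (an editor addition, using the FP16MobileNetV3Large values above):

```python
# Convert XNNPACK latency (microseconds per inference) to throughput.
# Values are the FP16MobileNetV3Large results reported above.
latencies_us = {"d": 1446, "c": 1447, "b": 1462, "a": 1463}

for run, us in latencies_us.items():
    per_second = 1_000_000 / us  # 1e6 microseconds in one second
    print(f"{run}: {per_second:.1f} inferences/s")
```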
simdjson 3.10, Throughput Test: LargeRandom (GB/s, more is better):
c: 1.99, b: 1.99, a: 1.99, d: 1.98 (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -lrt
simdjson 3.10, Throughput Test: PartialTweets (GB/s, more is better):
c: 8.19, d: 8.18, b: 8.18, a: 8.18 (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -lrt
simdjson 3.10, Throughput Test: Kostya (GB/s, more is better):
d: 5.36, c: 5.36, b: 5.36, a: 5.36 (SE +/- 0.01, N = 3; SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -lrt
Mobile Neural Network 2.9.b11b7037d, Model: MobileNetV2_224 (ms, fewer is better):
d: 1.831 (MIN: 1.78 / MAX: 2.14), a: 1.895 (MIN: 1.72 / MAX: 3.17), b: 1.934 (MIN: 1.73 / MAX: 3.11), c: 1.936 (MIN: 1.71 / MAX: 2.97); SE +/- 0.067, N = 3
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
simdjson 3.10, Throughput Test: DistinctUserID (GB/s, more is better):
c: 8.67, a: 8.37, b: 7.22, d: 7.21 (SE +/- 0.15, N = 15)
(CXX) g++ options: -O3 -lrt
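A geometric mean is the usual way to condense one run's throughput across several workloads into a single figure. This summary is an editor addition, not part of the exported result; it uses the five simdjson throughput values reported above:

```python
from statistics import geometric_mean

# Per-run simdjson throughput (GB/s) across the five workloads above:
# TopTweet, LargeRandom, PartialTweets, Kostya, DistinctUserID.
runs = {
    "a": [7.10, 1.99, 8.18, 5.36, 8.37],
    "b": [8.51, 1.99, 8.18, 5.36, 7.22],
    "c": [8.52, 1.99, 8.19, 5.36, 8.67],
    "d": [8.52, 1.98, 8.18, 5.36, 7.21],
}

for run, values in runs.items():
    print(f"{run}: {geometric_mean(values):.2f} GB/s (geometric mean)")
```

On these numbers, run c posts the highest combined simdjson throughput, driven mainly by its TopTweet and DistinctUserID results.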
Phoronix Test Suite v10.8.5