new 79 AMD Ryzen 9 7950X 16-Core testing with a ASUS ROG STRIX X670E-E GAMING WIFI (2124 BIOS) and AMD Radeon RX 7900 GRE 16GB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408137-NE-NEW79606155&grr.
new 79 - System Details (identical configuration for runs a, b, c, d)

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (2124 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 16GB DDR5-6000MT/s G Skill F5-6000J3038F16G
Disk: 2000GB Corsair MP700 PRO
Graphics: AMD Radeon RX 7900 GRE 16GB
Audio: AMD Navi 31 HDMI/DP
Monitor: DELL U2723QE
Network: Intel I225-V + Intel Wi-Fi 6E
OS: Ubuntu 24.04
Kernel: 6.10.0-phx (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.2~git2406040600.8112d4~oibaf~n (git-8112d44 2024-06-04 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); CPU Microcode: 0xa601206
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
new 79 - Result Overview: 48 tests across Blender 4.2, LeelaChessZero 0.31.1, XNNPACK 2cd86b, Stockfish 16, OSPRay 3.2, Mobile Neural Network 2.9, simdjson 3.10, Build2 0.17, GROMACS 2024, ACES DGEMM 1.0, Etcpak 2.0, 7-Zip 23.01, POV-Ray 3.7, Y-Cruncher 0.8.5, and x265. Per-test results for runs a, b, c, and d follow below.
Blender 4.2 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 464.08 | b: 465.26 | c: 465.19 | d: 465.51 (SE +/- 0.29, N = 3)

LeelaChessZero 0.31.1 - Backend: Eigen (Nodes Per Second, More Is Better)
a: 174 | b: 175 | c: 179 | d: 171 (SE +/- 0.67, N = 3)
1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.31.1 - Backend: BLAS (Nodes Per Second, More Is Better)
a: 176 | b: 174 | c: 172 | d: 171 (SE +/- 2.19, N = 3)
1. (CXX) g++ options: -flto -pthread
XNNPACK 2cd86b - Model: QU8MobileNetV3Small (us, Fewer Is Better)
a: 706 | b: 706 | c: 709 | d: 709 (SE +/- 0.67, N = 3)

XNNPACK 2cd86b - Model: QU8MobileNetV3Large (us, Fewer Is Better)
a: 1092 | b: 1090 | c: 1099 | d: 1100 (SE +/- 4.00, N = 3)

XNNPACK 2cd86b - Model: QU8MobileNetV2 (us, Fewer Is Better)
a: 811 | b: 811 | c: 809 | d: 809 (SE +/- 2.85, N = 3)

XNNPACK 2cd86b - Model: FP16MobileNetV3Small (us, Fewer Is Better)
a: 721 | b: 718 | c: 727 | d: 719 (SE +/- 8.50, N = 3)

XNNPACK 2cd86b - Model: FP16MobileNetV3Large (us, Fewer Is Better)
a: 1236 | b: 1241 | c: 1244 | d: 1232 (SE +/- 6.23, N = 3)

XNNPACK 2cd86b - Model: FP16MobileNetV2 (us, Fewer Is Better)
a: 982 | b: 983 | c: 984 | d: 981 (SE +/- 8.67, N = 3)

XNNPACK 2cd86b - Model: FP32MobileNetV3Small (us, Fewer Is Better)
a: 749 | b: 755 | c: 732 | d: 757 (SE +/- 2.31, N = 3)

XNNPACK 2cd86b - Model: FP32MobileNetV3Large (us, Fewer Is Better)
a: 1503 | b: 1495 | c: 1498 | d: 1492 (SE +/- 16.05, N = 3)

XNNPACK 2cd86b - Model: FP32MobileNetV2 (us, Fewer Is Better)
a: 1221 | b: 1212 | c: 1223 | d: 1223 (SE +/- 15.51, N = 3)
1. (CXX) g++ options: -O3 -lrt -lm (all XNNPACK results)
Stockfish 16 Chess Benchmark (Nodes Per Second, More Is Better)
a: 51640585 | b: 49624569 | c: 50637694 | d: 48528523 (SE +/- 478873.94, N = 15)
1. Stockfish 16 by the Stockfish developers (see AUTHORS file)
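Stockfish shows the largest run-to-run variation of any test in this comparison. As a quick sketch (using only the four run values above), the peak-to-peak relative spread can be computed as:

```python
# Stockfish nodes/s for runs a, b, c, d (values from the result above).
runs = [51640585, 49624569, 50637694, 48528523]

mean = sum(runs) / len(runs)
spread = (max(runs) - min(runs)) / mean  # peak-to-peak relative spread

print(f"mean = {mean:,.0f} nodes/s")
print(f"spread = {spread:.1%}")  # roughly a 6% gap between fastest and slowest run
```

A spread this size is consistent with the large per-run standard error (SE +/- 478873.94 over N = 15) reported above.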
Blender 4.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 159.08 | b: 160.32 | c: 160.37 | d: 159.29 (SE +/- 0.07, N = 3)

OSPRay 3.2 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
a: 9.18083 | b: 9.16838 | c: 9.14803 | d: 9.16006 (SE +/- 0.01053, N = 3)

Blender 4.2 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 139.45 | b: 139.98 | c: 140.22 | d: 139.57 (SE +/- 0.16, N = 3)

OSPRay 3.2 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
a: 241.04 | b: 238.90 | c: 238.90 | d: 239.97 (SE +/- 0.46, N = 3)
Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 (ms, Fewer Is Better)
a: 22.42 (min 18.03 / max 47.92) | b: 21.95 (min 18.2 / max 59.41) | c: 22.64 (min 19.17 / max 58) | d: 22.20 (min 17.83 / max 52.42) (SE +/- 0.33, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
a: 2.220 (min 1.99 / max 3.92) | b: 2.251 (min 2.01 / max 7.63) | c: 2.329 (min 2.02 / max 9.6) | d: 2.223 (min 2.03 / max 3.7) (SE +/- 0.012, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 (ms, Fewer Is Better)
a: 2.756 (min 2.59 / max 6.62) | b: 2.797 (min 2.46 / max 5.73) | c: 2.784 (min 2.61 / max 6.04) | d: 2.774 (min 2.51 / max 11.65) (SE +/- 0.057, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
a: 3.502 (min 3.17 / max 8.74) | b: 3.537 (min 3.2 / max 12.47) | c: 3.604 (min 3.22 / max 23.52) | d: 3.484 (min 3.24 / max 8.25) (SE +/- 0.041, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, Fewer Is Better)
a: 10.51 (min 9.86 / max 21.12) | b: 11.14 (min 9.74 / max 33.28) | c: 11.31 (min 10.05 / max 25.26) | d: 11.57 (min 10.13 / max 26.67) (SE +/- 0.08, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 (ms, Fewer Is Better)
a: 2.386 (min 2.24 / max 10.51) | b: 2.358 (min 2.09 / max 10.87) | c: 2.341 (min 2.14 / max 5.79) | d: 2.370 (min 2.12 / max 5.54) (SE +/- 0.005, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, Fewer Is Better)
a: 1.663 (min 1.36 / max 4.03) | b: 1.671 (min 1.34 / max 4.25) | c: 1.649 (min 1.44 / max 4.09) | d: 1.677 (min 1.41 / max 5.35) (SE +/- 0.062, N = 3)

Mobile Neural Network 2.9.b11b7037d - Model: nasnet (ms, Fewer Is Better)
a: 10.70 (min 9.44 / max 27.79) | b: 10.75 (min 9.18 / max 27.8) | c: 11.24 (min 9.53 / max 50.85) | d: 10.68 (min 9.44 / max 20.82) (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl (all Mobile Neural Network results)
OSPRay 3.2 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
a: 9.22669 | b: 9.20103 | c: 9.20827 | d: 9.18898 (SE +/- 0.00159, N = 3)

simdjson 3.10 - Throughput Test: Kostya (GB/s, More Is Better)
a: 6.00 | b: 6.08 | c: 5.94 | d: 6.00 (SE +/- 0.01, N = 3)

Build2 0.17 - Time To Compile (Seconds, Fewer Is Better)
a: 86.65 | b: 86.93 | c: 88.55 | d: 85.64 (SE +/- 0.57, N = 3)

simdjson 3.10 - Throughput Test: LargeRandom (GB/s, More Is Better)
a: 1.82 | b: 1.83 | c: 1.80 | d: 1.81 (SE +/- 0.00, N = 3)

simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, More Is Better)
a: 10.03 | b: 10.25 | c: 10.19 | d: 10.24 (SE +/- 0.10, N = 3)

simdjson 3.10 - Throughput Test: TopTweet (GB/s, More Is Better)
a: 10.27 | b: 10.26 | c: 10.39 | d: 10.20 (SE +/- 0.06, N = 3)

simdjson 3.10 - Throughput Test: PartialTweets (GB/s, More Is Better)
a: 9.69 | b: 9.41 | c: 9.76 | d: 9.36 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -lrt (all simdjson results)
Blender 4.2 - Blend File: Junkshop - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 69.44 | b: 69.45 | c: 69.27 | d: 69.11 (SE +/- 0.12, N = 3)

Blender 4.2 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 68.17 | b: 68.44 | c: 68.96 | d: 68.54 (SE +/- 0.05, N = 3)

GROMACS 2024 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
a: 2.730 | b: 2.735 | c: 2.737 | d: 2.736 (SE +/- 0.002, N = 3)
1. (CXX) g++ options: -O3 -lm

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
a: 7.93212 | b: 7.95178 | c: 7.82786 | d: 7.96421 (SE +/- 0.02157, N = 3)

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
a: 8.08388 | b: 8.07985 | c: 8.02187 | d: 8.08799 (SE +/- 0.02569, N = 3)

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
a: 9.21506 | b: 9.19495 | c: 9.21053 | d: 9.18853 (SE +/- 0.00896, N = 3)

Blender 4.2 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
a: 51.94 | b: 51.90 | c: 52.15 | d: 51.80 (SE +/- 0.09, N = 3)
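Summing the six Blender 4.2 scene times gives a per-run aggregate render time; a small sketch using the values reported above (the aggregation itself is not part of the original report):

```python
# Blender 4.2 CPU-only render times in seconds, per scene, for runs (a, b, c, d).
scenes = {
    "Barbershop": (464.08, 465.26, 465.19, 465.51),
    "Pabellon":   (159.08, 160.32, 160.37, 159.29),
    "Classroom":  (139.45, 139.98, 140.22, 139.57),
    "Junkshop":   (69.44, 69.45, 69.27, 69.11),
    "Fishy Cat":  (68.17, 68.44, 68.96, 68.54),
    "BMW27":      (51.94, 51.90, 52.15, 51.80),
}

# Total render time per run, summed across all six scenes.
totals = {run: sum(times[i] for times in scenes.values())
          for i, run in enumerate("abcd")}
for run, total in totals.items():
    print(f"run {run}: {total:.2f} s total")
```

The totals land within a few seconds of each other (roughly 952 to 956 s), i.e. under half a percent of run-to-run variation for the Blender workloads.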
ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
a: 1157.36 | b: 1152.61 | c: 1153.27 | d: 1153.57 (SE +/- 0.70, N = 3)
1. (CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas

Etcpak 2.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, More Is Better)
a: 596.92 | b: 591.48 | c: 593.15 | d: 588.46 (SE +/- 0.30, N = 3)
1. (CXX) g++ options: -flto -pthread

7-Zip Compression Test: Decompression Rating (MIPS, More Is Better)
a: 175017 | b: 175386 | c: 175290 | d: 175073 (SE +/- 39.50, N = 3)
1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

7-Zip Compression Test: Compression Rating (MIPS, More Is Better)
a: 175582 | b: 178764 | c: 178906 | d: 177610 (SE +/- 156.70, N = 3)
1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20
POV-Ray - Trace Time (Seconds, Fewer Is Better)
a: 17.90 | b: 17.95 | c: 17.98 | d: 18.54 (SE +/- 0.02, N = 3)
1. POV-Ray 3.7.0.10.unofficial

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
a: 16.05 | b: 16.04 | c: 15.96 | d: 16.02 (SE +/- 0.01, N = 3)

x265 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
a: 35.58 | b: 35.45 | c: 35.81 | d: 35.55 (SE +/- 0.13, N = 3)
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
a: 7.718 | b: 7.698 | c: 7.691 | d: 7.713 (SE +/- 0.004, N = 3)

x265 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
a: 123.85 | b: 123.96 | c: 123.71 | d: 124.01 (SE +/- 0.21, N = 3)
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6
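Across the board, runs a through d agree closely. One common way to condense such comparisons is the geometric mean of per-test ratios; a sketch using a hand-picked subset of the "more is better" values above (the subset choice is illustrative, not from the original report):

```python
import math

# Selected "more is better" results as (run a, run d) pairs from the tables above.
results = {
    "Stockfish (nodes/s)":      (51640585, 48528523),
    "x265 1080p (FPS)":         (123.85, 124.01),
    "simdjson TopTweet (GB/s)": (10.27, 10.20),
    "7-Zip Compress (MIPS)":    (175582, 177610),
}

# Geometric mean of per-test ratios: run a relative to run d.
ratios = [a / d for a, d in results.values()]
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"Run a is {geomean:.3f}x run d on this subset")
```

On this subset the geometric mean lands near 1.01x, dominated by the noisy Stockfish result; the remaining tests differ by well under one percent.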
Phoronix Test Suite v10.8.5