aiggy: AMD Ryzen 7 3800XT 8-Core testing with an MSI X370 XPOWER GAMING TITANIUM (MS-7A31) v1.0 (1.MS BIOS) motherboard and a Sapphire AMD Radeon HD 4650 on Debian 12 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408158-NE-AIGGY940267&grs .
aiggy - System Details (identical for runs a and b):
Processor: AMD Ryzen 7 3800XT 8-Core @ 3.90GHz (8 Cores / 16 Threads)
Motherboard: MSI X370 XPOWER GAMING TITANIUM (MS-7A31) v1.0 (1.MS BIOS)
Chipset: AMD Starship/Matisse
Memory: 2 x 8GB DDR4-3200MT/s CL16-18-18 D4-3200
Disk: 128GB INTEL SSDPEKKW128G7
Graphics: Sapphire AMD Radeon HD 4650
Audio: AMD RV710/730
Network: Intel I211
OS: Debian 12
Kernel: 6.1.0-22-amd64 (x86_64)
Display Server: X Server 1.20.11
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details: Transparent Huge Pages: always
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021
Python Details: Python 3.11.2
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
aiggy - Result Summary (values listed as a / b):
mnn: squeezenetv1.1 - 3.209 / 2.748
xnnpack: FP16MobileNetV2 - 2632 / 2254
xnnpack: QU8MobileNetV3Small - 734 / 793
xnnpack: FP16MobileNetV3Small - 919 / 967
compress-lz4: 9 - Compression Speed - 42.11 / 44.02
compress-lz4: 3 - Compression Speed - 108.95 / 113.08
mnn: resnet-v2-50 - 19.904 / 20.631
simdjson: TopTweet - 4.56 / 4.71
compress-lz4: 9 - Decompression Speed - 4581.9 / 4731.3
mnn: nasnet - 12.629 / 13.037
mnn: MobileNetV2_224 - 3.106 / 3.017
stockfish: Chess Benchmark - 18872171 / 18399415
compress-lz4: 3 - Decompression Speed - 4374.8 / 4485.8
compress-lz4: 12 - Compression Speed - 14.38 / 14.04
z3: 2.smt2 - 65.959 / 67.5
compress-lz4: 12 - Decompression Speed - 4733.6 / 4644.2
simdjson: LargeRand - 1.09 / 1.07
webp: Quality 100 - 11.45 / 11.66
webp: Default - 18.82 / 18.50
simdjson: PartialTweets - 4.32 / 4.39
webp: Quality 100, Lossless, Highest Compression - 0.66 / 0.65
compress-lz4: 1 - Compression Speed - 736.83 / 746.25
webp: Quality 100, Lossless - 1.67 / 1.69
mnn: SqueezeNetV1.0 - 5.78 / 5.849
z3: 1.smt2 - 28.113 / 27.825
compress-lz4: 1 - Decompression Speed - 4930.1 / 4980.2
mnn: mobilenetV3 - 1.717 / 1.733
compress-lz4: 2 - Compression Speed - 307.54 / 310.39
webp: Quality 100, Highest Compression - 3.58 / 3.61
gromacs: water_GMX50_bare - 0.648 / 0.643
simdjson: DistinctUserID - 4.77 / 4.8
mnn: inception-v3 - 25.97 / 25.812
ospray: particle_volume/ao/real_time - 2.6666 / 2.65045
build2: Time To Compile - 201.124 / 202.265
apache-siege: 1000 - 22800.43 / 22674.73
y-cruncher: 1B - 43.649 / 43.889
mnn: mobilenet-v1-1.0 - 2.755 / 2.77
y-cruncher: 500M - 19.74 / 19.643
ospray: particle_volume/pathtracer/real_time - 86.9299 / 87.2415
simdjson: Kostya - 3.03 / 3.02
ospray: gravity_spheres_volume/dim_512/pathtracer/real_time - 2.01843 / 2.02459
xnnpack: FP32MobileNetV2 - 4103 / 4091
ospray: gravity_spheres_volume/dim_512/scivis/real_time - 1.22574 / 1.22217
xnnpack: QU8MobileNetV3Large - 1494 / 1498
ospray: particle_volume/scivis/real_time - 2.65461 / 2.66049
c-ray: 1080p - 16 - 88.115 / 87.953
apache-siege: 200 - 22884.61 / 22844.09
compress-7zip: Decompression Rating - 63907 / 63997
compress-lz4: 2 - Decompression Speed - 4180.5 / 4175.3
ospray: gravity_spheres_volume/dim_512/ao/real_time - 1.28405 / 1.2855
compress-7zip: Compression Rating - 71271 / 71350
xnnpack: FP32MobileNetV3Small - 991 / 990
namd: STMV with 1,066,628 Atoms - 0.16553 / 0.16537
xnnpack: FP32MobileNetV3Large - 4336 / 4339
namd: ATPase with 327,506 Atoms - 0.55160 / 0.55187
c-ray: 4K - 16 - 351.823 / 351.708
apache-siege: 500 - 22852.53 / 22852.53
xnnpack: QU8MobileNetV2 - 1482 / 1482
xnnpack: FP16MobileNetV3Large - 2332 / 2332
oidn: RTLightmap.hdr.4096x4096 - CPU-Only - 0.17 / 0.17
oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only - 0.33 / 0.33
oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only - 0.33 / 0.33
Mobile Neural Network 2.9.b11b7037d, Model: squeezenetv1.1 (ms, fewer is better): a = 3.209, b = 2.748. MIN/MAX a: 2.56 / 80.33, b: 2.72 / 3.3. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b, Model: FP16MobileNetV2 (us, fewer is better): a = 2632, b = 2254. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b, Model: QU8MobileNetV3Small (us, fewer is better): a = 734, b = 793. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b, Model: FP16MobileNetV3Small (us, fewer is better): a = 919, b = 967. (CXX) g++ options: -O3 -lrt -lm
LZ4 Compression 1.10, Compression Level: 9 - Compression Speed (MB/s, more is better): a = 42.11, b = 44.02. (CC) gcc options: -O3 -pthread
LZ4 Compression 1.10, Compression Level: 3 - Compression Speed (MB/s, more is better): a = 108.95, b = 113.08. (CC) gcc options: -O3 -pthread
Mobile Neural Network 2.9.b11b7037d, Model: resnet-v2-50 (ms, fewer is better): a = 19.90, b = 20.63. MIN/MAX a: 19.49 / 131.35, b: 19.56 / 188.35. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
simdjson 3.10, Throughput Test: TopTweet (GB/s, more is better): a = 4.56, b = 4.71. (CXX) g++ options: -O3 -lrt
LZ4 Compression 1.10, Compression Level: 9 - Decompression Speed (MB/s, more is better): a = 4581.9, b = 4731.3. (CC) gcc options: -O3 -pthread
Mobile Neural Network 2.9.b11b7037d, Model: nasnet (ms, fewer is better): a = 12.63, b = 13.04. MIN/MAX a: 12.53 / 13.51, b: 12.18 / 181.23. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Mobile Neural Network 2.9.b11b7037d, Model: MobileNetV2_224 (ms, fewer is better): a = 3.106, b = 3.017. MIN/MAX a: 3.08 / 3.26, b: 2.99 / 3.64. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Stockfish Chess Benchmark (Nodes Per Second, more is better): a = 18872171, b = 18399415. Stockfish 15.1 by the Stockfish developers (see AUTHORS file)
LZ4 Compression 1.10, Compression Level: 3 - Decompression Speed (MB/s, more is better): a = 4374.8, b = 4485.8. (CC) gcc options: -O3 -pthread
LZ4 Compression 1.10, Compression Level: 12 - Compression Speed (MB/s, more is better): a = 14.38, b = 14.04. (CC) gcc options: -O3 -pthread
Z3 Theorem Prover 4.12.1, SMT File: 2.smt2 (Seconds, fewer is better): a = 65.96, b = 67.50. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC
LZ4 Compression 1.10, Compression Level: 12 - Decompression Speed (MB/s, more is better): a = 4733.6, b = 4644.2. (CC) gcc options: -O3 -pthread
simdjson 3.10, Throughput Test: LargeRandom (GB/s, more is better): a = 1.09, b = 1.07. (CXX) g++ options: -O3 -lrt
WebP Image Encode 1.4, Encode Settings: Quality 100 (MP/s, more is better): a = 11.45, b = 11.66. (CC) gcc options: -fvisibility=hidden -O2 -lm
WebP Image Encode 1.4, Encode Settings: Default (MP/s, more is better): a = 18.82, b = 18.50. (CC) gcc options: -fvisibility=hidden -O2 -lm
simdjson 3.10, Throughput Test: PartialTweets (GB/s, more is better): a = 4.32, b = 4.39. (CXX) g++ options: -O3 -lrt
WebP Image Encode 1.4, Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, more is better): a = 0.66, b = 0.65. (CC) gcc options: -fvisibility=hidden -O2 -lm
LZ4 Compression 1.10, Compression Level: 1 - Compression Speed (MB/s, more is better): a = 736.83, b = 746.25. (CC) gcc options: -O3 -pthread
WebP Image Encode 1.4, Encode Settings: Quality 100, Lossless (MP/s, more is better): a = 1.67, b = 1.69. (CC) gcc options: -fvisibility=hidden -O2 -lm
Mobile Neural Network 2.9.b11b7037d, Model: SqueezeNetV1.0 (ms, fewer is better): a = 5.780, b = 5.849. MIN/MAX a: 5.71 / 6.21, b: 5.79 / 6.32. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Z3 Theorem Prover 4.12.1, SMT File: 1.smt2 (Seconds, fewer is better): a = 28.11, b = 27.83. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC
LZ4 Compression 1.10, Compression Level: 1 - Decompression Speed (MB/s, more is better): a = 4930.1, b = 4980.2. (CC) gcc options: -O3 -pthread
Mobile Neural Network 2.9.b11b7037d, Model: mobilenetV3 (ms, fewer is better): a = 1.717, b = 1.733. MIN/MAX a: 1.69 / 1.9, b: 1.71 / 1.91. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
LZ4 Compression 1.10, Compression Level: 2 - Compression Speed (MB/s, more is better): a = 307.54, b = 310.39. (CC) gcc options: -O3 -pthread
WebP Image Encode 1.4, Encode Settings: Quality 100, Highest Compression (MP/s, more is better): a = 3.58, b = 3.61. (CC) gcc options: -fvisibility=hidden -O2 -lm
GROMACS (version 2022.5-Debian_2022.5_2), Input: water_GMX50_bare (Ns Per Day, more is better): a = 0.648, b = 0.643.
simdjson 3.10, Throughput Test: DistinctUserID (GB/s, more is better): a = 4.77, b = 4.80. (CXX) g++ options: -O3 -lrt
Mobile Neural Network 2.9.b11b7037d, Model: inception-v3 (ms, fewer is better): a = 25.97, b = 25.81. MIN/MAX a: 25.52 / 195.13, b: 25.64 / 53.36. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
OSPRay 3.2, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better): a = 2.66660, b = 2.65045.
Build2 0.17, Time To Compile (Seconds, fewer is better): a = 201.12, b = 202.27.
Apache Siege 2.4.62, Concurrent Users: 1000 (Transactions Per Second, more is better): a = 22800.43, b = 22674.73. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz
Y-Cruncher 0.8.5, Pi Digits To Calculate: 1B (Seconds, fewer is better): a = 43.65, b = 43.89.
Mobile Neural Network 2.9.b11b7037d, Model: mobilenet-v1-1.0 (ms, fewer is better): a = 2.755, b = 2.770. MIN/MAX a: 2.72 / 3.38, b: 2.74 / 3.04. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
Y-Cruncher 0.8.5, Pi Digits To Calculate: 500M (Seconds, fewer is better): a = 19.74, b = 19.64.
OSPRay 3.2, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): a = 86.93, b = 87.24.
simdjson 3.10, Throughput Test: Kostya (GB/s, more is better): a = 3.03, b = 3.02. (CXX) g++ options: -O3 -lrt
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): a = 2.01843, b = 2.02459.
XNNPACK 2cd86b, Model: FP32MobileNetV2 (us, fewer is better): a = 4103, b = 4091. (CXX) g++ options: -O3 -lrt -lm
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better): a = 1.22574, b = 1.22217.
XNNPACK 2cd86b, Model: QU8MobileNetV3Large (us, fewer is better): a = 1494, b = 1498. (CXX) g++ options: -O3 -lrt -lm
OSPRay 3.2, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better): a = 2.65461, b = 2.66049.
C-Ray 2.0, Resolution: 1080p - Rays Per Pixel: 16 (Seconds, fewer is better): a = 88.12, b = 87.95. (CC) gcc options: -lpthread -lm
Apache Siege 2.4.62, Concurrent Users: 200 (Transactions Per Second, more is better): a = 22884.61, b = 22844.09. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz
7-Zip Compression 24.05, Test: Decompression Rating (MIPS, more is better): a = 63907, b = 63997. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
LZ4 Compression 1.10, Compression Level: 2 - Decompression Speed (MB/s, more is better): a = 4180.5, b = 4175.3. (CC) gcc options: -O3 -pthread
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better): a = 1.28405, b = 1.28550.
7-Zip Compression 24.05, Test: Compression Rating (MIPS, more is better): a = 71271, b = 71350. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
XNNPACK 2cd86b, Model: FP32MobileNetV3Small (us, fewer is better): a = 991, b = 990. (CXX) g++ options: -O3 -lrt -lm
NAMD 3.0b6, Input: STMV with 1,066,628 Atoms (ns/day, more is better): a = 0.16553, b = 0.16537.
XNNPACK 2cd86b, Model: FP32MobileNetV3Large (us, fewer is better): a = 4336, b = 4339. (CXX) g++ options: -O3 -lrt -lm
NAMD 3.0b6, Input: ATPase with 327,506 Atoms (ns/day, more is better): a = 0.55160, b = 0.55187.
C-Ray 2.0, Resolution: 4K - Rays Per Pixel: 16 (Seconds, fewer is better): a = 351.82, b = 351.71. (CC) gcc options: -lpthread -lm
Apache Siege 2.4.62, Concurrent Users: 500 (Transactions Per Second, more is better): a = 22852.53, b = 22852.53. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz
XNNPACK 2cd86b, Model: QU8MobileNetV2 (us, fewer is better): a = 1482, b = 1482. (CXX) g++ options: -O3 -lrt -lm
XNNPACK 2cd86b, Model: FP16MobileNetV3Large (us, fewer is better): a = 2332, b = 2332. (CXX) g++ options: -O3 -lrt -lm
Intel Open Image Denoise 2.3, Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, more is better): a = 0.17, b = 0.17.
Intel Open Image Denoise 2.3, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better): a = 0.33, b = 0.33.
Intel Open Image Denoise 2.3, Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, more is better): a = 0.33, b = 0.33.
Phoronix Test Suite v10.8.5