aiggy: AMD Ryzen 7 3800XT 8-Core testing with an MSI X370 XPOWER GAMING TITANIUM (MS-7A31) v1.0 (1.MS BIOS) and Sapphire AMD Radeon HD 4650 on Debian 12 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408155-NE-AIGGY960267&grt.
aiggy - System Configuration (shared by runs a and b)

Processor: AMD Ryzen 7 3800XT 8-Core @ 3.90GHz (8 Cores / 16 Threads)
Motherboard: MSI X370 XPOWER GAMING TITANIUM (MS-7A31) v1.0 (1.MS BIOS)
Chipset: AMD Starship/Matisse
Memory: 2 x 8GB DDR4-3200MT/s CL16-18-18 D4-3200
Disk: 128GB INTEL SSDPEKKW128G7
Graphics: Sapphire AMD Radeon HD 4650
Audio: AMD RV710/730
Network: Intel I211
OS: Debian 12
Kernel: 6.1.0-22-amd64 (x86_64)
Display Server: X Server 1.20.11
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1024x768

Kernel Details - Transparent Huge Pages: always
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021
Python Details - Python 3.11.2
Security Details - gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
aiggy - Results Overview (runs a and b; units and test details are given with the individual results below)

Test | a | b
compress-7zip: Compression Rating | 71271 | 71350
compress-7zip: Decompression Rating | 63907 | 63997
apache-siege: 200 | 22884.61 | 22844.09
apache-siege: 500 | 22852.53 | 22852.53
apache-siege: 1000 | 22800.43 | 22674.73
build2: Time To Compile | 201.124 | 202.265
c-ray: 4K - 16 | 351.823 | 351.708
c-ray: 1080p - 16 | 88.115 | 87.953
gromacs: water_GMX50_bare | 0.648 | 0.643
oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only | 0.33 | 0.33
oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only | 0.33 | 0.33
oidn: RTLightmap.hdr.4096x4096 - CPU-Only | 0.17 | 0.17
compress-lz4: 1 - Compression Speed | 736.83 | 746.25
compress-lz4: 1 - Decompression Speed | 4930.1 | 4980.2
compress-lz4: 2 - Compression Speed | 307.54 | 310.39
compress-lz4: 2 - Decompression Speed | 4180.5 | 4175.3
compress-lz4: 3 - Compression Speed | 108.95 | 113.08
compress-lz4: 3 - Decompression Speed | 4374.8 | 4485.8
compress-lz4: 9 - Compression Speed | 42.11 | 44.02
compress-lz4: 9 - Decompression Speed | 4581.9 | 4731.3
compress-lz4: 12 - Compression Speed | 14.38 | 14.04
compress-lz4: 12 - Decompression Speed | 4733.6 | 4644.2
mnn: nasnet | 12.629 | 13.037
mnn: mobilenetV3 | 1.717 | 1.733
mnn: squeezenetv1.1 | 3.209 | 2.748
mnn: resnet-v2-50 | 19.904 | 20.631
mnn: SqueezeNetV1.0 | 5.78 | 5.849
mnn: MobileNetV2_224 | 3.106 | 3.017
mnn: mobilenet-v1-1.0 | 2.755 | 2.77
mnn: inception-v3 | 25.97 | 25.812
namd: ATPase with 327,506 Atoms | 0.55160 | 0.55187
namd: STMV with 1,066,628 Atoms | 0.16553 | 0.16537
ospray: particle_volume/ao/real_time | 2.6666 | 2.65045
ospray: particle_volume/scivis/real_time | 2.65461 | 2.66049
ospray: particle_volume/pathtracer/real_time | 86.9299 | 87.2415
ospray: gravity_spheres_volume/dim_512/ao/real_time | 1.28405 | 1.2855
ospray: gravity_spheres_volume/dim_512/scivis/real_time | 1.22574 | 1.22217
ospray: gravity_spheres_volume/dim_512/pathtracer/real_time | 2.01843 | 2.02459
simdjson: Kostya | 3.03 | 3.02
simdjson: TopTweet | 4.56 | 4.71
simdjson: LargeRand | 1.09 | 1.07
simdjson: PartialTweets | 4.32 | 4.39
simdjson: DistinctUserID | 4.77 | 4.8
stockfish: Chess Benchmark | 18872171 | 18399415
webp: Default | 18.82 | 18.50
webp: Quality 100 | 11.45 | 11.66
webp: Quality 100, Lossless | 1.67 | 1.69
webp: Quality 100, Highest Compression | 3.58 | 3.61
webp: Quality 100, Lossless, Highest Compression | 0.66 | 0.65
xnnpack: FP32MobileNetV2 | 4103 | 4091
xnnpack: FP32MobileNetV3Large | 4336 | 4339
xnnpack: FP32MobileNetV3Small | 991 | 990
xnnpack: FP16MobileNetV2 | 2632 | 2254
xnnpack: FP16MobileNetV3Large | 2332 | 2332
xnnpack: FP16MobileNetV3Small | 919 | 967
xnnpack: QU8MobileNetV2 | 1482 | 1482
xnnpack: QU8MobileNetV3Large | 1494 | 1498
xnnpack: QU8MobileNetV3Small | 734 | 793
y-cruncher: 1B | 43.649 | 43.889
y-cruncher: 500M | 19.74 | 19.643
z3: 1.smt2 | 28.113 | 27.825
z3: 2.smt2 | 65.959 | 67.5
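For readers comparing runs a and b, the short Python sketch below (not part of the original export) shows one way to compute the run-to-run difference from the overview values. The three entries copied into the dictionary are taken from the table above; the higher-is-better flags follow the per-test headings later in this result file, and the dictionary can be extended by hand as needed.

# Minimal sketch, assuming a handful of the overview values above are copied in manually.
results = {
    # test name: (run a, run b, higher_is_better)
    "compress-7zip: Compression Rating": (71271, 71350, True),
    "build2: Time To Compile": (201.124, 202.265, False),
    "xnnpack: FP16MobileNetV2": (2632, 2254, False),
}

for test, (a, b, higher_is_better) in results.items():
    delta_pct = (b - a) / a * 100.0
    # A positive advantage means run b performed better on this test.
    advantage = delta_pct if higher_is_better else -delta_pct
    print(f"{test}: a={a}, b={b}, b vs. a {advantage:+.2f}%")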
7-Zip Compression 24.05 - Test: Compression Rating (MIPS, More Is Better)
  a: 71271
  b: 71350
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 24.05 - Test: Decompression Rating (MIPS, More Is Better)
  a: 63907
  b: 63997
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Apache Siege 2.4.62 - Concurrent Users: 200 (Transactions Per Second, More Is Better)
  a: 22884.61
  b: 22844.09
  1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz

Apache Siege 2.4.62 - Concurrent Users: 500 (Transactions Per Second, More Is Better)
  a: 22852.53
  b: 22852.53
  1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz

Apache Siege 2.4.62 - Concurrent Users: 1000 (Transactions Per Second, More Is Better)
  a: 22800.43
  b: 22674.73
  1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz
Build2 0.17 - Time To Compile (Seconds, Fewer Is Better)
  a: 201.12
  b: 202.27
C-Ray 2.0 - Resolution: 4K - Rays Per Pixel: 16 (Seconds, Fewer Is Better)
  a: 351.82
  b: 351.71
  1. (CC) gcc options: -lpthread -lm

C-Ray 2.0 - Resolution: 1080p - Rays Per Pixel: 16 (Seconds, Fewer Is Better)
  a: 88.12
  b: 87.95
  1. (CC) gcc options: -lpthread -lm
GROMACS - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  a: 0.648
  b: 0.643
  1. GROMACS version: 2022.5-Debian_2022.5_2
Intel Open Image Denoise 2.3 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
  a: 0.33
  b: 0.33

Intel Open Image Denoise 2.3 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
  a: 0.33
  b: 0.33

Intel Open Image Denoise 2.3 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better)
  a: 0.17
  b: 0.17
LZ4 Compression 1.10 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
  a: 736.83
  b: 746.25
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
  a: 4930.1
  b: 4980.2
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 2 - Compression Speed (MB/s, More Is Better)
  a: 307.54
  b: 310.39
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 2 - Decompression Speed (MB/s, More Is Better)
  a: 4180.5
  b: 4175.3
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  a: 108.95
  b: 113.08
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  a: 4374.8
  b: 4485.8
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
  a: 42.11
  b: 44.02
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 9 - Decompression Speed (MB/s, More Is Better)
  a: 4581.9
  b: 4731.3
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 12 - Compression Speed (MB/s, More Is Better)
  a: 14.38
  b: 14.04
  1. (CC) gcc options: -O3 -pthread

LZ4 Compression 1.10 - Compression Level: 12 - Decompression Speed (MB/s, More Is Better)
  a: 4733.6
  b: 4644.2
  1. (CC) gcc options: -O3 -pthread
Mobile Neural Network 2.9.b11b7037d - Model: nasnet (ms, Fewer Is Better)
  a: 12.63 (MIN: 12.53 / MAX: 13.51)
  b: 13.04 (MIN: 12.18 / MAX: 181.23)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: mobilenetV3 (ms, Fewer Is Better)
  a: 1.717 (MIN: 1.69 / MAX: 1.9)
  b: 1.733 (MIN: 1.71 / MAX: 1.91)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: squeezenetv1.1 (ms, Fewer Is Better)
  a: 3.209 (MIN: 2.56 / MAX: 80.33)
  b: 2.748 (MIN: 2.72 / MAX: 3.3)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: resnet-v2-50 (ms, Fewer Is Better)
  a: 19.90 (MIN: 19.49 / MAX: 131.35)
  b: 20.63 (MIN: 19.56 / MAX: 188.35)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  a: 5.780 (MIN: 5.71 / MAX: 6.21)
  b: 5.849 (MIN: 5.79 / MAX: 6.32)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: MobileNetV2_224 (ms, Fewer Is Better)
  a: 3.106 (MIN: 3.08 / MAX: 3.26)
  b: 3.017 (MIN: 2.99 / MAX: 3.64)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  a: 2.755 (MIN: 2.72 / MAX: 3.38)
  b: 2.770 (MIN: 2.74 / MAX: 3.04)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Mobile Neural Network 2.9.b11b7037d - Model: inception-v3 (ms, Fewer Is Better)
  a: 25.97 (MIN: 25.52 / MAX: 195.13)
  b: 25.81 (MIN: 25.64 / MAX: 53.36)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
NAMD 3.0b6 - Input: ATPase with 327,506 Atoms (ns/day, More Is Better)
  a: 0.55160
  b: 0.55187

NAMD 3.0b6 - Input: STMV with 1,066,628 Atoms (ns/day, More Is Better)
  a: 0.16553
  b: 0.16537
OSPRay 3.2 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
  a: 2.66660
  b: 2.65045

OSPRay 3.2 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
  a: 2.65461
  b: 2.66049

OSPRay 3.2 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
  a: 86.93
  b: 87.24

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  a: 1.28405
  b: 1.28550

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  a: 1.22574
  b: 1.22217

OSPRay 3.2 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  a: 2.01843
  b: 2.02459
simdjson 3.10 - Throughput Test: Kostya (GB/s, More Is Better)
  a: 3.03
  b: 3.02
  1. (CXX) g++ options: -O3 -lrt

simdjson 3.10 - Throughput Test: TopTweet (GB/s, More Is Better)
  a: 4.56
  b: 4.71
  1. (CXX) g++ options: -O3 -lrt

simdjson 3.10 - Throughput Test: LargeRandom (GB/s, More Is Better)
  a: 1.09
  b: 1.07
  1. (CXX) g++ options: -O3 -lrt

simdjson 3.10 - Throughput Test: PartialTweets (GB/s, More Is Better)
  a: 4.32
  b: 4.39
  1. (CXX) g++ options: -O3 -lrt

simdjson 3.10 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  a: 4.77
  b: 4.80
  1. (CXX) g++ options: -O3 -lrt
Stockfish Chess Benchmark (Nodes Per Second, More Is Better)
  a: 18872171
  b: 18399415
  1. Stockfish 15.1 by the Stockfish developers (see AUTHORS file)
WebP Image Encode 1.4 - Encode Settings: Default (MP/s, More Is Better)
  a: 18.82
  b: 18.50
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.4 - Encode Settings: Quality 100 (MP/s, More Is Better)
  a: 11.45
  b: 11.66
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.4 - Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
  a: 1.67
  b: 1.69
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better)
  a: 3.58
  b: 3.61
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
  a: 0.66
  b: 0.65
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm
XNNPACK 2cd86b - Model: FP32MobileNetV2 (us, Fewer Is Better)
  a: 4103
  b: 4091
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: FP32MobileNetV3Large (us, Fewer Is Better)
  a: 4336
  b: 4339
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: FP32MobileNetV3Small (us, Fewer Is Better)
  a: 991
  b: 990
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: FP16MobileNetV2 (us, Fewer Is Better)
  a: 2632
  b: 2254
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: FP16MobileNetV3Large (us, Fewer Is Better)
  a: 2332
  b: 2332
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: FP16MobileNetV3Small (us, Fewer Is Better)
  a: 919
  b: 967
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: QU8MobileNetV2 (us, Fewer Is Better)
  a: 1482
  b: 1482
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: QU8MobileNetV3Large (us, Fewer Is Better)
  a: 1494
  b: 1498
  1. (CXX) g++ options: -O3 -lrt -lm

XNNPACK 2cd86b - Model: QU8MobileNetV3Small (us, Fewer Is Better)
  a: 734
  b: 793
  1. (CXX) g++ options: -O3 -lrt -lm
Y-Cruncher 0.8.5 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better)
  a: 43.65
  b: 43.89

Y-Cruncher 0.8.5 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
  a: 19.74
  b: 19.64
Z3 Theorem Prover 4.12.1 - SMT File: 1.smt2 (Seconds, Fewer Is Better)
  a: 28.11
  b: 27.83
  1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

Z3 Theorem Prover 4.12.1 - SMT File: 2.smt2 (Seconds, Fewer Is Better)
  a: 65.96
  b: 67.50
  1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC
Phoronix Test Suite v10.8.5