threadripper eo 2022: AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) motherboard and a Gigabyte AMD Radeon RX 5500 XT 8GB graphics card on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2212279-NE-THREADRIP40&sor&grr.
threadripper eo 2022 - System Details (identical for configurations a and b):
Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: Gigabyte AMD Radeon RX 5500 XT 8GB (1900/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: VA2431
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080
Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301025
Graphics Details: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: xxx-xxx-xxx
Java Details: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
threadripper eo 2022 - Result overview: the export's sortable summary table lists every benchmark and configuration in this comparison (brl-cad, blender, build-linux-kernel, smhasher, openvkl, nekrs, jpegxl, openradioss, ffmpeg, build-python, build-nodejs, mnn, webp2, scikit-learn, openfoam, pgbench, ncnn, stargate, numenta-nab, stream, cockroach, spark, avifenc, nginx, build-erlang, onednn, clickhouse, deepsparse, dragonflydb, openvino, xmrig, rocksdb, spacy, graphics-magick, jpegxl-decode, aom-av1, svt-av1, build-wasmer, build-php, tensorflow, encodec, webp, srsran, unvanquished, redis, astcenc, compress-7zip, stress-ng, natron, rav1e, y-cruncher, encode-flac, unpack-linux) together with the raw values for runs a and b. Those per-test values are repeated in the individual result sections that follow.
BRL-CAD 7.32.6 - VGR Performance Metric (VGR Performance Metric, More Is Better): a: 405453, b: 403757. 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
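Each result pairs the same benchmark on configurations a and b, and most deltas are small. As an illustration only (not part of the original export), here is a minimal Python sketch of how such an a-versus-b delta can be computed; the function name and variables are my own, and the numbers are the BRL-CAD figures above.

```python
# Illustrative only: how far run b deviates from run a for one metric.
# Values are the BRL-CAD "VGR Performance Metric" results above (More Is Better),
# so a positive delta would mean b outperformed a.

def percent_delta(a: float, b: float, higher_is_better: bool = True) -> float:
    """Relative difference of run b versus run a, in percent."""
    delta = (b - a) / a * 100.0
    return delta if higher_is_better else -delta

a_vgr = 405453  # run a
b_vgr = 403757  # run b
print(f"b vs a: {percent_delta(a_vgr, b_vgr):+.2f}%")  # prints roughly -0.42%
```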
Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): a: 583.81, b: 586.00
Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better): a: 476.67, b: 478.89
SMHasher 2022-08-22 - Hash: SHA3-256 (cycles/hash, Fewer Is Better): a: 2649.38, b: 2649.30. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: SHA3-256 (MiB/sec, More Is Better): a: 146.70, b: 146.47
OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better): a: 288 (MIN: 37 / MAX: 3639), b: 287 (MIN: 37 / MAX: 3621)
OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better): a: 184 (MIN: 18 / MAX: 3346), b: 183 (MIN: 18 / MAX: 3314)
nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, More Is Better): a: 67648400000, b: 67381400000. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better): a: 0.66, b: 0.66. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better): a: 0.68, b: 0.68
OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better): a: 287.95, b: 290.20
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better): a: 28.62, b: 28.68. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (Seconds, Fewer Is Better): a: 264.66, b: 264.16
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better): a: 28.63, b: 28.74
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (Seconds, Fewer Is Better): a: 264.63, b: 263.53
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (FPS, More Is Better): a: 12.37, b: 12.30
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (Seconds, Fewer Is Better): a: 204.11, b: 205.24
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better): a: 257.57, b: 257.95
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better): a: 13.73, b: 13.89
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (Seconds, Fewer Is Better): a: 183.91, b: 181.78
Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better): a: 232.35, b: 233.72
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (FPS, More Is Better): a: 47.13, b: 46.96
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (Seconds, Fewer Is Better): a: 160.71, b: 161.32
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (FPS, More Is Better): a: 47.17, b: 47.04
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (Seconds, Fewer Is Better): a: 160.60, b: 161.04
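Each FFmpeg scenario is reported twice, as a frame rate and as a total encode time; these describe the same run, so multiplying them recovers an approximately constant figure that I read as the input clip's frame count (my interpretation, not stated in the export). A quick Python check against the run "a" values above:

```python
# Sanity check (illustrative): FPS x seconds should be roughly constant across
# scenarios that encode the same input. Figures are run "a" values from the
# FFmpeg Platform / Video On Demand results above.
pairs = {
    "libx265 Platform":        (28.62, 264.66),
    "libx265 Video On Demand": (28.63, 264.63),
    "libx264 Platform":        (47.13, 160.71),
    "libx264 Video On Demand": (47.17, 160.60),
}
for name, (fps, seconds) in pairs.items():
    print(f"{name}: ~{fps * seconds:.0f} frames")  # all come out near 7575
```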
Blender 3.4 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): a: 176.40, b: 177.27
OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better): a: 173.48, b: 174.41
Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better): a: 23.25 (MIN: 23.03 / MAX: 23.59), b: 25.51 (MIN: 25.34 / MAX: 25.86). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better): a: 3.054 (MIN: 3.02 / MAX: 3.11), b: 3.021 (MIN: 2.99 / MAX: 3.17)
Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, Fewer Is Better): a: 4.361 (MIN: 4.33 / MAX: 4.48), b: 4.607 (MIN: 4.57 / MAX: 4.8)
Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better): a: 6.491 (MIN: 6.43 / MAX: 6.55), b: 6.547 (MIN: 6.49 / MAX: 6.68)
Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better): a: 21.56 (MIN: 21.35 / MAX: 21.84), b: 22.83 (MIN: 22.67 / MAX: 23.06)
Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better): a: 4.311 (MIN: 4.26 / MAX: 4.37), b: 4.191 (MIN: 4.14 / MAX: 4.28)
Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better): a: 2.282 (MIN: 2.24 / MAX: 2.33), b: 2.455 (MIN: 2.41 / MAX: 2.64)
Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better): a: 14.31 (MIN: 14.19 / MAX: 14.67), b: 14.96 (MIN: 14.82 / MAX: 15.24)
WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better): a: 0.15, b: 0.15. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Scikit-Learn 1.1.3 - Benchmark: Sparse Random Projections, 100 Iterations (Seconds, Fewer Is Better): a: 162.42, b: 162.59
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better): a: 121.46, b: 123.71. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better): a: 26.87, b: 27.28
Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): a: 148.83, b: 149.10
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.074, b: 0.075. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, More Is Better): a: 671452, b: 663640
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.386, b: 0.404
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, More Is Better): a: 646832, b: 618774
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 5.506, b: 9.786
PostgreSQL 15 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better): a: 45407, b: 25548
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 2.467, b: 2.971
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better): a: 40535, b: 33658
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better): a: 1.712, b: 1.705
PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better): a: 29212, b: 29318
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better): a: 0.153, b: 0.166
PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better): a: 653213, b: 602288
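The pgbench average-latency and TPS rows are two readings of the same closed-loop run: with N concurrent clients, average latency in milliseconds is roughly N / TPS x 1000. A small consistency check against the run "a" values above (standard queueing arithmetic, not something stated in the export; the variable names are my own):

```python
# Consistency check (illustrative): reported average latency (ms) ~= clients / TPS * 1000.
# Values are run "a" figures from the PostgreSQL results above.
cases = [
    # (clients, TPS, reported average latency in ms)
    (50,  671452, 0.074),   # 100 - 50 - Read Only
    (250, 646832, 0.386),   # 100 - 250 - Read Only
    (250, 45407,  5.506),   # 100 - 250 - Read Write
    (100, 40535,  2.467),   # 100 - 100 - Read Write
]
for clients, tps, reported_ms in cases:
    derived_ms = clients / tps * 1000
    print(f"{clients} clients: derived {derived_ms:.3f} ms vs reported {reported_ms} ms")
```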
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better): a: 8.59, b: 8.70
JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better): a: 8.95, b: 9.02
NCNN 20220729 - Target: CPU - Model: FastestDet (ms, Fewer Is Better): a: 8.91 (MIN: 8.78 / MAX: 10.28), b: 8.90 (MIN: 8.58 / MAX: 9.7). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better): a: 133.48 (MIN: 133.04 / MAX: 138.19), b: 137.59 (MIN: 134.63 / MAX: 191.94)
NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better): a: 26.61 (MIN: 25.84 / MAX: 27.71), b: 26.65 (MIN: 26.08 / MAX: 28.34)
NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): a: 21.24 (MIN: 19.92 / MAX: 23.46), b: 22.44 (MIN: 20.15 / MAX: 54.28)
NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): a: 25.55 (MIN: 24.49 / MAX: 30.39), b: 26.29 (MIN: 24.78 / MAX: 29.98)
NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better): a: 21.43 (MIN: 20.84 / MAX: 22.74), b: 23.05 (MIN: 21.55 / MAX: 58.12)
NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better): a: 8.95 (MIN: 8.42 / MAX: 10.23), b: 9.45 (MIN: 8.49 / MAX: 10.69)
NCNN 20220729 - Target: CPU - Model: resnet18 (ms, Fewer Is Better): a: 12.14 (MIN: 11.54 / MAX: 13.22), b: 12.78 (MIN: 11.63 / MAX: 14.62)
NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better): a: 33.71 (MIN: 32.75 / MAX: 34.65), b: 34.14 (MIN: 32.66 / MAX: 42.06)
NCNN 20220729 - Target: CPU - Model: googlenet (ms, Fewer Is Better): a: 18.44 (MIN: 17.65 / MAX: 19.84), b: 19.69 (MIN: 18.17 / MAX: 25.94)
NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better): a: 3.67 (MIN: 3.59 / MAX: 4.3), b: 3.75 (MIN: 3.54 / MAX: 4.29)
NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): a: 9.26 (MIN: 9.16 / MAX: 9.93), b: 9.56 (MIN: 9.17 / MAX: 10.47)
NCNN 20220729 - Target: CPU - Model: mnasnet (ms, Fewer Is Better): a: 6.29 (MIN: 6.17 / MAX: 6.88), b: 6.54 (MIN: 6.2 / MAX: 7.42)
NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): a: 7.89 (MIN: 7.72 / MAX: 8.67), b: 8.14 (MIN: 7.68 / MAX: 16.58)
NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): a: 6.44 (MIN: 6.34 / MAX: 7.1), b: 6.62 (MIN: 6.24 / MAX: 7.88)
NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): a: 6.92 (MIN: 6.79 / MAX: 7.59), b: 7.13 (MIN: 6.77 / MAX: 8.08)
NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better): a: 15.69 (MIN: 15.48 / MAX: 16.49), b: 16.39 (MIN: 15.57 / MAX: 24.36)
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, More Is Better): a: 1.244804, b: 1.314658. 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds, Fewer Is Better): a: 129.15, b: 133.48
OpenFOAM 10 - Input: motorBike - Execution Time (Seconds, Fewer Is Better): a: 71.60, b: 71.61
OpenFOAM 10 - Input: motorBike - Mesh Time (Seconds, Fewer Is Better): a: 40.37, b: 40.59
Stream 2013-01-17 - Type: Copy (MB/s, More Is Better): a: 54649.6, b: 55314.0. 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better): a: 8.51, b: 8.58
JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better): a: 8.89, b: 8.99
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 1024 (ops/s, More Is Better): a: 59301.4, b: 59175.4
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 1024 (ops/s, More Is Better): a: 71952.1, b: 72222.0
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 1024 (ops/s, More Is Better): a: 76564.0, b: 76335.4
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 1024 (ops/s, More Is Better): a: 94157.8, b: 93822.1
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, More Is Better): a: 61112, b: 60710
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, More Is Better): a: 80024.0, b: 79220.4
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, More Is Better): a: 98031.9, b: 97822.4
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, More Is Better): a: 75643.3, b: 75289.1
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 256 (ops/s, More Is Better): a: 77546.7, b: 76768.6
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 128 (ops/s, More Is Better): a: 50479.2, b: 50108.8
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 256 (ops/s, More Is Better): a: 82286.1, b: 82467.0
CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 256 (ops/s, More Is Better): a: 61211.1, b: 61278.6
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 256 (ops/s, More Is Better): a: 100761.5, b: 100166.8
CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 128 (ops/s, More Is Better): a: 77055.3, b: 77363.2
CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 128 (ops/s, More Is Better): a: 70558.3, b: 69948.7
CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 128 (ops/s, More Is Better): a: 103073.5, b: 103111.7
Scikit-Learn 1.1.3 - Benchmark: MNIST Dataset (Seconds, Fewer Is Better): a: 103.90, b: 104.93
OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, Fewer Is Better): a: 98.86, b: 100.60
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 7.91, b: 7.88. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
CockroachDB 22.2 - Workload: MoVR - Concurrency: 256 (ops/s, More Is Better): a: 476.5, b: 480.7
CockroachDB 22.2 - Workload: MoVR - Concurrency: 128 (ops/s, More Is Better): a: 477.5, b: 483.1
CockroachDB 22.2 - Workload: MoVR - Concurrency: 512 (ops/s, More Is Better): a: 476.6, b: 481.1
CockroachDB 22.2 - Workload: MoVR - Concurrency: 1024 (ops/s, More Is Better): a: 474.9, b: 480.5
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 1.28, b: 1.34
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better): a: 1.67, b: 1.63
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better): a: 1.76, b: 1.84
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better): a: 4.67, b: 4.78
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 4.20, b: 4.21
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 68.96, b: 69.93
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 3.40000000, b: 3.59872799
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (FPS, More Is Better): a: 69.22, b: 69.15
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (Seconds, Fewer Is Better): a: 72.95, b: 73.03
libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, Fewer Is Better): a: 91.04, b: 90.56. 1. (CXX) g++ options: -O3 -fPIC -lm
nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better): a: 125263.22, b: 126633.00. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, Fewer Is Better): a: 90.93, b: 90.25
nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better): a: 133262.41, b: 137072.73
nginx 1.23.2 - Connections: 200 (Requests Per Second, More Is Better): a: 148446.23, b: 148101.85
nginx 1.23.2 - Connections: 100 (Requests Per Second, More Is Better): a: 148017.08, b: 145523.99
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, More Is Better): a: 1.911816, b: 1.947376
oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 2604.63 (MIN: 2588.48), b: 2772.01 (MIN: 2730.23). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better): a: 78.18, b: 79.08
oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 2614.46 (MIN: 2598.48), b: 2759.20 (MIN: 2717.67)
oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 2559.09 (MIN: 2544.08), b: 2719.77 (MIN: 2677.09)
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 1366.14 (MIN: 1343.04), b: 1446.12 (MIN: 1407.49)
WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better): a: 0.30, b: 0.30
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 1295.49 (MIN: 1280.49), b: 1451.46 (MIN: 1416.39)
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 1361.72 (MIN: 1338.9), b: 1441.78 (MIN: 1407.45)
Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, Fewer Is Better): a: 77.22, b: 77.41
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better): a: 240.17 (MIN: 17.07 / MAX: 6666.67), b: 235.82 (MIN: 17.11 / MAX: 3750). 1. ClickHouse server version 22.5.4.19 (official build).
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better): a: 236.78 (MIN: 16.92 / MAX: 12000), b: 234.64 (MIN: 17.06 / MAX: 12000)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better): a: 196.74 (MIN: 17.5 / MAX: 3529.41), b: 201.86 (MIN: 17.35 / MAX: 5454.55)
Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, More Is Better): a: 2.436915, b: 2.268748
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 172.65, b: 177.22
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 69.48, b: 67.65
Blender 3.4 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): a: 69.44, b: 69.59
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 608.48, b: 610.75
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 19.64, b: 19.58
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): a: 3609156.11, b: 3593257.69. 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): a: 3273370.07, b: 3263145.89
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): a: 3388128.93, b: 3388656.73
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU b a 600 1200 1800 2400 3000 2714.24 2734.68 MIN: 1614.04 / MAX: 3194.85 MIN: 1593.76 / MAX: 3225.01 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU b a 0.9765 1.953 2.9295 3.906 4.8825 4.34 4.30 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Dragonflydb Clients: 50 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:1 a b 700K 1400K 2100K 2800K 3500K 3341824.71 3320752.26 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb Clients: 50 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 5:1 a b 700K 1400K 2100K 2800K 3500K 3193072.56 3164997.08 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb Clients: 50 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:5 a b 800K 1600K 2400K 3200K 4000K 3519377.56 3515736.22 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
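Purely as an illustration of what the Dragonfly "Set To Get Ratio" labels above mean at the protocol level - this is not the harness or client configuration the test profile actually uses - the following minimal hiredis client issues one SET for every five GETs against a Redis-protocol endpoint (Dragonfly speaks the same protocol). The host, port, key space and request count here are arbitrary assumptions for the sketch; build with something like cc mix_sketch.c -lhiredis.

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        /* Assumed local server on the default Redis port; adjust as needed. */
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

        for (int i = 0; i < 60000; i++) {
            char key[32];
            snprintf(key, sizeof key, "key:%d", i % 1000);

            redisReply *r;
            if (i % 6 == 0)   /* 1 SET for every 5 GETs ~ a 1:5 set:get ratio */
                r = redisCommand(c, "SET %s somevalue", key);
            else
                r = redisCommand(c, "GET %s", key);
            if (r) freeReplyObject(r);
        }

        redisFree(c);
        return 0;
    }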
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP32 - Device: CPU b a 600 1200 1800 2400 3000 2734.21 2750.06 MIN: 2202.17 / MAX: 3150.66 MIN: 2288.66 / MAX: 3108.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP32 - Device: CPU a b 0.9743 1.9486 2.9229 3.8972 4.8715 4.33 4.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Xmrig Variant: Monero - Hash Count: 1M OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Monero - Hash Count: 1M a b 3K 6K 9K 12K 15K 15342.8 15334.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16 - Device: CPU b a 400 800 1200 1600 2000 1690.53 1735.28 MIN: 1466.54 / MAX: 1987.53 MIN: 1511.37 / MAX: 1909.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16 - Device: CPU b a 2 4 6 8 10 7.02 6.82 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream b a 130 260 390 520 650 610.93 611.89
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a b 5 10 15 20 25 19.56 19.52
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU b a 300 600 900 1200 1500 1408.60 1410.67 MIN: 1373.05 / MAX: 1468.03 MIN: 1372.29 / MAX: 1527.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU b a 2 4 6 8 10 8.48 8.46 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b 40 80 120 160 200 188.92 193.23 MIN: 155.98 / MAX: 237.74 MIN: 153.71 / MAX: 246.32 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b 14 28 42 56 70 63.47 62.04 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenRadioss Model: Cell Phone Drop Test OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Cell Phone Drop Test a b 12 24 36 48 60 54.86 55.19
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 152.23 152.94
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 78.72 78.44
Facebook RocksDB Test: Random Fill Sync OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill Sync b a 5K 10K 15K 20K 25K 25297 25136 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU b a 4 8 12 16 20 17.32 17.47 MIN: 10.28 / MAX: 33.98 MIN: 14.95 / MAX: 28.95 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU b a 150 300 450 600 750 692.14 686.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
spaCy Model: en_core_web_trf OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_trf a b 300 600 900 1200 1500 1465 1463
spaCy Model: en_core_web_lg OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_lg a b 3K 6K 9K 12K 15K 12454 12319
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU a b 4 8 12 16 20 16.75 16.84 MIN: 11.9 / MAX: 43.39 MIN: 10.79 / MAX: 39.98 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU a b 150 300 450 600 750 715.98 712.18 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16 - Device: CPU a b 5 10 15 20 25 18.11 18.36 MIN: 10.34 / MAX: 38.17 MIN: 15.89 / MAX: 37.21 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16 - Device: CPU a b 140 280 420 560 700 661.93 653.16 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a 7 14 21 28 35 28.40 28.43 MIN: 14.44 / MAX: 59.77 MIN: 14.41 / MAX: 59.57 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a 200 400 600 800 1000 844.82 843.69 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16 - Device: CPU a b 7 14 21 28 35 31.53 31.82 MIN: 16.43 / MAX: 53.16 MIN: 20.13 / MAX: 53.75 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16 - Device: CPU a b 80 160 240 320 400 380.28 376.70 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 0.2385 0.477 0.7155 0.954 1.1925 1.06 1.06 MIN: 0.6 / MAX: 11.35 MIN: 0.6 / MAX: 12.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 5K 10K 15K 20K 25K 22553.27 22417.03 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 0.2565 0.513 0.7695 1.026 1.2825 1.13 1.14 MIN: 0.62 / MAX: 11.19 MIN: 0.63 / MAX: 12.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 5K 10K 15K 20K 25K 21060.95 20837.59 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
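A note on reading the OpenVINO FPS/ms pairs above: the two charts are not simple reciprocals, because the throughput run keeps many inference requests in flight at once. By Little's law, throughput x latency ~ average concurrency; for the Age Gender Recognition Retail 0013 FP16-INT8 result that works out to roughly 22,553 FPS x 0.00106 s ~ 23.9 requests in flight, and for FP16 roughly 21,061 x 0.00113 ~ 23.8 - consistent with about two dozen parallel requests on the 24-core CPU under test. This is an inference from the reported numbers, not something stated in the result file.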
Facebook RocksDB Test: Random Fill OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill a b 200K 400K 600K 800K 1000K 811216 800273 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB Test: Update Random OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Update Random a b 150K 300K 450K 600K 750K 684330 651892 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB Test: Read Random Write Random OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read Random Write Random a b 600K 1200K 1800K 2400K 3000K 2683538 2589021 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
GraphicsMagick Operation: Resizing OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing b a 500 1000 1500 2000 2500 2132 2129 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Enhanced OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced b a 120 240 360 480 600 570 568 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Sharpen OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen b a 80 160 240 320 400 376 376 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Rotate OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate b a 140 280 420 560 700 649 638 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
Facebook RocksDB Test: Read While Writing OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read While Writing a b 1000K 2000K 3000K 4000K 5000K 4506140 4452658 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Facebook RocksDB Test: Random Read OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Read b a 20M 40M 60M 80M 100M 100100766 98049490 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
GraphicsMagick Operation: Swirl OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Swirl b a 300 600 900 1200 1500 1193 1178 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: Noise-Gaussian OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian b a 120 240 360 480 600 535 514 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick Operation: HWB Color Space OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space b a 200 400 600 800 1000 1089 1043 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K b a 0.0765 0.153 0.2295 0.306 0.3825 0.34 0.34 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
JPEG XL Decoding libjxl CPU Threads: 1 OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: 1 b a 10 20 30 40 50 45.67 44.60
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K b a 3 6 9 12 15 13.57 13.54 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Blender Blend File: BMW27 - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: BMW27 - Compute: CPU-Only a b 12 24 36 48 60 53.91 53.95
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream b a 9 18 27 36 45 36.48 37.44
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream b a 6 12 18 24 30 27.41 26.70
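For the DeepSparse synchronous single-stream pairs, the ms/batch and items/sec charts are two views of the same measurement: with one item processed at a time, items/sec ~ 1000 / (ms per item). For example, 1000 / 36.48 ms ~ 27.41 items/sec and 1000 / 37.44 ms ~ 26.70 items/sec, matching the values above. The asynchronous multi-stream pairs do not reduce this way, since multiple streams run concurrently.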
Facebook RocksDB Test: Sequential Fill OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Sequential Fill a b 200K 400K 600K 800K 1000K 922415 876834 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Stargate Digital Audio Workstation Sample Rate: 44100 - Buffer Size: 512 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 44100 - Buffer Size: 512 a b 0.8759 1.7518 2.6277 3.5036 4.3795 3.892921 2.678564 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
AOM AV1 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p b a 4 8 12 16 20 14.23 14.13 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 74.70 75.08
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 13.39 13.32
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 74.92 75.03
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 13.35 13.33
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream a b 6 12 18 24 30 23.16 24.34
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream a b 10 20 30 40 50 43.16 41.08
Stargate Digital Audio Workstation Sample Rate: 480000 - Buffer Size: 512 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 480000 - Buffer Size: 512 a b 0.9275 1.855 2.7825 3.71 4.6375 4.122029 2.722708 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 4 - Input: Bosphorus 4K b a 0.7922 1.5844 2.3766 3.1688 3.961 3.521 3.500 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Xmrig Variant: Wownero - Hash Count: 1M OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Wownero - Hash Count: 1M a b 5K 10K 15K 20K 25K 21153.3 21018.6 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
libavif avifenc Encoder Speed: 2 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 2 a b 11 22 33 44 55 47.56 47.64 1. (CXX) g++ options: -O3 -fPIC -lm
Timed Wasmer Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile a b 11 22 33 44 55 46.86 46.96 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Stargate Digital Audio Workstation Sample Rate: 96000 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 96000 - Buffer Size: 1024 b a 0.8652 1.7304 2.5956 3.4608 4.326 3.845343 3.681535 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
FFmpeg Encoder: libx264 - Scenario: Live OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live b a 40 80 120 160 200 200.46 198.58 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg Encoder: libx264 - Scenario: Live OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live b a 6 12 18 24 30 25.19 25.43 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Timed Linux Kernel Compilation Build: defconfig OpenBenchmarking.org Seconds, Fewer Is Better Timed Linux Kernel Compilation 6.1 Build: defconfig a b 10 20 30 40 50 43.94 44.00
Timed PHP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed PHP Compilation 8.1.9 Time To Compile a b 10 20 30 40 50 43.61 43.70
TensorFlow Device: CPU - Batch Size: 16 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet a b 10 20 30 40 50 44.45 44.39
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 77.45 77.51
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 154.89 154.76
EnCodec Target Bandwidth: 24 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 24 kbps a b 10 20 30 40 50 42.08 42.31
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 111.39 112.01
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 107.48 106.87
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 11.94 11.96
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 83.67 83.59
WebP Image Encode Encode Settings: Quality 100, Lossless, Highest Compression OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Lossless, Highest Compression b a 0.1373 0.2746 0.4119 0.5492 0.6865 0.61 0.61 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
Numenta Anomaly Benchmark Detector: Contextual Anomaly Detector OSE OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Contextual Anomaly Detector OSE a b 9 18 27 36 45 38.98 39.16
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b 12 24 36 48 60 53.96 54.02
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b 50 100 150 200 250 222.16 221.88
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM b a 40 80 120 160 200 160.7 160.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM b a 90 180 270 360 450 403.8 402.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
EnCodec Target Bandwidth: 3 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 3 kbps a b 9 18 27 36 45 38.19 38.43
EnCodec Target Bandwidth: 6 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 6 kbps a b 9 18 27 36 45 37.60 37.90
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream a b 4 8 12 16 20 16.69 16.78
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream a b 13 26 39 52 65 59.87 59.54
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream b a 2 4 6 8 10 8.7988 8.8688
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream b a 30 60 90 120 150 113.55 112.66
Unvanquished Resolution: 1920 x 1080 - Effects Quality: Ultra OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: Ultra b a 60 120 180 240 300 286.5 248.6
Unvanquished Resolution: 1920 x 1080 - Effects Quality: High OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: High b a 60 120 180 240 300 263.3 257.0
EnCodec Target Bandwidth: 1.5 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 1.5 kbps a b 8 16 24 32 40 36.01 36.37
Redis Test: SET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 50 a b 400K 800K 1200K 1600K 2000K 1746939.25 1590308.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
ASTC Encoder Preset: Exhaustive OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Exhaustive a b 0.3541 0.7082 1.0623 1.4164 1.7705 1.5739 1.5693 1. (CXX) g++ options: -O3 -flto -pthread
Redis Test: SET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 b a 400K 800K 1200K 1600K 2000K 1797429.5 1784408.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Unvanquished Resolution: 1920 x 1080 - Effects Quality: Medium OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: Medium b a 60 120 180 240 300 262.8 260.0
Stargate Digital Audio Workstation Sample Rate: 480000 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 480000 - Buffer Size: 1024 b a 1.1693 2.3386 3.5079 4.6772 5.8465 5.196953 4.917282 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
srsRAN Test: OFDM_Test OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test b a 30M 60M 90M 120M 150M 144000000 140100000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 30 60 90 120 150 150.6 150.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 80 160 240 320 400 376.6 375.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
Stargate Digital Audio Workstation Sample Rate: 44100 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 44100 - Buffer Size: 1024 b a 1.1941 2.3882 3.5823 4.7764 5.9705 5.307245 5.223813 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
Scikit-Learn Benchmark: TSNE MNIST Dataset OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.1.3 Benchmark: TSNE MNIST Dataset a b 7 14 21 28 35 31.59 31.93
TensorFlow Device: CPU - Batch Size: 16 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: AlexNet b a 14 28 42 56 70 63.15 62.47
7-Zip Compression Test: Decompression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 22.01 Test: Decompression Rating a b 40K 80K 120K 160K 200K 177640 170171 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
7-Zip Compression Test: Compression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 22.01 Test: Compression Rating b a 40K 80K 120K 160K 200K 168680 167681 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Stress-NG Test: Glibc C String Functions OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc C String Functions a b 900K 1800K 2700K 3600K 4500K 4187483.04 4118591.13 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Context Switching OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Context Switching b a 2M 4M 6M 8M 10M 9774697.61 9618965.78 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic b a 90K 180K 270K 360K 450K 421495.46 421253.99 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Futex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Futex a b 700K 1400K 2100K 2800K 3500K 3430545.94 3166174.89 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Memory Copying OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Memory Copying a b 1000 2000 3000 4000 5000 4863.37 4839.31 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: IO_uring OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: IO_uring a b 5K 10K 15K 20K 25K 22416.27 17193.98 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Malloc OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Malloc a b 11M 22M 33M 44M 55M 50825608.57 50473178.15 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: MEMFD OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MEMFD a b 200 400 600 800 1000 891.86 887.76 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: CPU Stress OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Stress b a 14K 28K 42K 56K 70K 65071.90 63007.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Forking OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Forking a b 11K 22K 33K 44K 55K 53674.14 52953.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: NUMA OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: NUMA a b 130 260 390 520 650 604.44 591.09 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: System V Message Passing OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: System V Message Passing a b 2M 4M 6M 8M 10M 9343696.19 9276368.69 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Semaphores OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Semaphores a b 1000K 2000K 3000K 4000K 5000K 4869829.14 4868774.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: MMAP OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MMAP a b 80 160 240 320 400 383.32 382.19 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Glibc Qsort Data Sorting OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc Qsort Data Sorting a b 80 160 240 320 400 385.80 385.67 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Vector Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Vector Math a b 40K 80K 120K 160K 200K 178859.70 178430.01 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: CPU Cache OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Cache b a 40 80 120 160 200 186.95 174.95 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Socket Activity OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Socket Activity b a 3K 6K 9K 12K 15K 15707.68 15355.92 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Crypto OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Crypto a b 10K 20K 30K 40K 50K 44751.55 44653.67 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Mutex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Mutex b a 3M 6M 9M 12M 15M 12810594.01 12785778.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Matrix Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Matrix Math a b 30K 60K 90K 120K 150K 140597.25 138081.21 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: SENDFILE OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: SENDFILE a b 90K 180K 270K 360K 450K 414239.19 411184.40 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Redis Test: GET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 500 a b 500K 1000K 1500K 2000K 2500K 2271735.25 2214977.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: GET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 50 a b 500K 1000K 1500K 2000K 2500K 2481064.75 2015000.12 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Natron Input: Spaceship OpenBenchmarking.org FPS, More Is Better Natron 2.4.3 Input: Spaceship b a 0.945 1.89 2.835 3.78 4.725 4.2 4.2
rav1e Speed: 1 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 1 a b 0.1755 0.351 0.5265 0.702 0.8775 0.780 0.778
Y-Cruncher Pi Digits To Calculate: 1B OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.7.10.9513 Pi Digits To Calculate: 1B a b 5 10 15 20 25 22.83 22.92
AOM AV1 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K a b 6 12 18 24 30 25.31 24.61 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
JPEG XL Decoding libjxl CPU Threads: All OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: All b a 60 120 180 240 300 251.91 245.68
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p a b 8 16 24 32 40 34.60 34.15 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM b a 13 26 39 52 65 60.2 60.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM b a 20 40 60 80 100 107.1 106.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
rav1e Speed: 5 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 5 b a 0.6183 1.2366 1.8549 2.4732 3.0915 2.748 2.741
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b 2 4 6 8 10 8.046 7.877 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p b a 0.2183 0.4366 0.6549 0.8732 1.0915 0.97 0.96 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU a b 2 4 6 8 10 6.06539 6.16553 MIN: 3.72 MIN: 3.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU a b 0.3921 0.7842 1.1763 1.5684 1.9605 1.68952 1.74252 MIN: 1.63 MIN: 1.63 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
ASTC Encoder Preset: Thorough OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Thorough a b 4 8 12 16 20 14.41 14.38 1. (CXX) g++ options: -O3 -flto -pthread
Numenta Anomaly Benchmark Detector: Bayesian Changepoint OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint a b 5 10 15 20 25 20.15 20.56
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K b a 7 14 21 28 35 32.16 31.97 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 40 80 120 160 200 175.5 170.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 90 180 270 360 450 413.1 401.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
FLAC Audio Encoding WAV To FLAC OpenBenchmarking.org Seconds, Fewer Is Better FLAC Audio Encoding 1.4 WAV To FLAC a b 4 8 12 16 20 17.80 17.96 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
rav1e Speed: 6 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 6 a b 0.8226 1.6452 2.4678 3.2904 4.113 3.656 3.641
WebP Image Encode Encode Settings: Quality 100, Lossless OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Lossless b a 0.342 0.684 1.026 1.368 1.71 1.52 1.50 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K b a 9 18 27 36 45 40.76 40.29 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K b a 9 18 27 36 45 40.93 40.37 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Timed CPython Compilation Build Configuration: Default OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Default b a 4 8 12 16 20 15.41 15.43
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 40 80 120 160 200 162.9 162.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 80 160 240 320 400 376.6 375.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
oneDNN Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b 0.3472 0.6944 1.0416 1.3888 1.736 1.50644 1.54322 MIN: 1.38 MIN: 1.37 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU a b 0.3309 0.6618 0.9927 1.3236 1.6545 1.43493 1.47047 MIN: 1.26 MIN: 1.24 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
AOM AV1 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p b a 10 20 30 40 50 43.34 43.08 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b 13 26 39 52 65 56.69 56.49 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Y-Cruncher Pi Digits To Calculate: 500M OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.7.10.9513 Pi Digits To Calculate: 500M a b 3 6 9 12 15 11.14 11.22
Numenta Anomaly Benchmark Detector: Relative Entropy OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Relative Entropy b a 3 6 9 12 15 13.11 13.21
SMHasher Hash: FarmHash128 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash128 a b 14 28 42 56 70 63.80 63.86 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: FarmHash128 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: FarmHash128 a b 3K 6K 9K 12K 15K 15823.93 15708.02 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
rav1e Speed: 10 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 10 a b 2 4 6 8 10 8.018 7.997
ASTC Encoder Preset: Fast OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Fast a b 80 160 240 320 400 366.33 364.32 1. (CXX) g++ options: -O3 -flto -pthread
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU b a 1.009 2.018 3.027 4.036 5.045 4.27034 4.48456 MIN: 4.03 MIN: 4.29 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU b a 2 4 6 8 10 4.74470 6.54191 MIN: 4.47 MIN: 6.16 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI a b 13 26 39 52 65 57.72 57.93 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI a b 8K 16K 24K 32K 40K 37533.1 37302.0 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: Spooky32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: Spooky32 b a 12 24 36 48 60 51.63 51.66 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: Spooky32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: Spooky32 a b 3K 6K 9K 12K 15K 14254.70 14206.73 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p b a 14 28 42 56 70 62.08 60.72 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX a b 10 20 30 40 50 43.28 43.29 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX a b 6K 12K 18K 24K 30K 27249.95 26861.97 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
oneDNN Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b 1.3319 2.6638 3.9957 5.3276 6.6595 5.89944 5.91961 MIN: 5.84 MIN: 5.83 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU a b 0.175 0.35 0.525 0.7 0.875 0.652318 0.777800 MIN: 0.56 MIN: 0.6 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
SMHasher Hash: fasthash32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: fasthash32 a b 9 18 27 36 45 37.18 37.26 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: fasthash32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: fasthash32 a b 1400 2800 4200 5600 7000 6591.27 6553.70 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p b a 16 32 48 64 80 73.28 72.59 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p b a 16 32 48 64 80 73.76 72.52 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce a b 8 16 24 32 40 34.76 35.02 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce a b 3K 6K 9K 12K 15K 15569.87 15465.65 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 a b 8 16 24 32 40 34.72 35.16 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 a b 14K 28K 42K 56K 70K 66642.16 65731.67 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
ASTC Encoder Preset: Medium OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Medium a b 30 60 90 120 150 120.33 120.08 1. (CXX) g++ options: -O3 -flto -pthread
libavif avifenc Encoder Speed: 6, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6, Lossless b a 2 4 6 8 10 7.571 7.607 1. (CXX) g++ options: -O3 -fPIC -lm
WebP Image Encode Encode Settings: Quality 100, Highest Compression OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Highest Compression a b 0.774 1.548 2.322 3.096 3.87 3.44 3.41 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b 30 60 90 120 150 147.34 129.52 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Unpacking The Linux Kernel linux-5.19.tar.xz OpenBenchmarking.org Seconds, Fewer Is Better Unpacking The Linux Kernel 5.19 linux-5.19.tar.xz a b 2 4 6 8 10 7.006 7.096
SMHasher Hash: wyhash OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: wyhash a b 6 12 18 24 30 26.33 26.39 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: wyhash OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: wyhash a b 5K 10K 15K 20K 25K 23068.13 22791.83 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 8 - Input: Bosphorus 1080p b a 30 60 90 120 150 113.86 111.93 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Numenta Anomaly Benchmark Detector: Windowed Gaussian OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Windowed Gaussian a b 2 4 6 8 10 6.199 6.327
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU b a 3 6 9 12 15 9.15733 9.16751 MIN: 8.99 MIN: 9.09 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a b 3 6 9 12 15 9.02473 9.42618 MIN: 8.88 MIN: 8.8 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
SVT-AV1 Encoder Mode: Preset 13 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 13 - Input: Bosphorus 4K b a 30 60 90 120 150 148.19 145.75 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
libavif avifenc Encoder Speed: 10, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 10, Lossless b a 1.1594 2.3188 3.4782 4.6376 5.797 5.078 5.153 1. (CXX) g++ options: -O3 -fPIC -lm
libavif avifenc Encoder Speed: 6 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6 b a 0.9027 1.8054 2.7081 3.6108 4.5135 4.008 4.012 1. (CXX) g++ options: -O3 -fPIC -lm
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b 0.6068 1.2136 1.8204 2.4272 3.034 2.63187 2.69706 MIN: 2.56 MIN: 2.55 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
WebP2 Image Encode Encode Settings: Quality 100, Compression Effort 5 OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Quality 100, Compression Effort 5 b a 2 4 6 8 10 8.43 8.39 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU a b 0.4722 0.9444 1.4166 1.8888 2.361 2.06629 2.09863 MIN: 1.98 MIN: 1.97 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
WebP2 Image Encode Encode Settings: Default OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default b a 3 6 9 12 15 10.27 9.60 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 12 - Input: Bosphorus 1080p b a 90 180 270 360 450 403.43 400.81 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
WebP Image Encode Encode Settings: Quality 100 OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100 a b 3 6 9 12 15 11.10 11.04 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
SVT-AV1 Encoder Mode: Preset 13 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b 80 160 240 320 400 359.56 359.18 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
WebP Image Encode Encode Settings: Default OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Default a b 4 8 12 16 20 17.87 17.78 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
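For context on the WebP Image Encode settings reported above ("Default", "Quality 100", "Quality 100, Lossless", and so on), the snippet below is a minimal sketch of how such modes map onto libwebp's simple encoding API. It is illustrative only: the exact parameters the test profile passes are assumptions (the "Highest Compression" variants correspond to a higher effort/method setting available through libwebp's advanced WebPConfig API, not shown here), and the input is a blank in-memory buffer rather than the benchmark's image. Build with something like cc webp_sketch.c -lwebp.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <webp/encode.h>

    int main(void) {
        const int w = 512, h = 512, stride = w * 4;
        uint8_t *rgba = calloc((size_t)stride * h, 1);   /* blank RGBA test image */
        uint8_t *out = NULL;
        if (!rgba) return 1;

        /* Lossy encode at quality 100 (roughly the "Quality 100" setting). */
        size_t lossy_bytes = WebPEncodeRGBA(rgba, w, h, stride, 100.0f, &out);
        printf("lossy q100: %zu bytes\n", lossy_bytes);
        WebPFree(out);

        /* Lossless encode (roughly the "Quality 100, Lossless" setting). */
        size_t lossless_bytes = WebPEncodeLosslessRGBA(rgba, w, h, stride, &out);
        printf("lossless:   %zu bytes\n", lossless_bytes);
        WebPFree(out);

        free(rgba);
        return 0;
    }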
Stream Type: Add OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Add b a 8K 16K 24K 32K 40K 37116.0 36849.4 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
Stream Type: Triad OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Triad b a 8K 16K 24K 32K 40K 37021.8 36787.1 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
Stream Type: Scale OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Scale b a 7K 14K 21K 28K 35K 33173.5 33038.0 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
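The Stream rows above (together with the Copy result listed earlier in this file) are McCalpin's classic memory-bandwidth kernels. As a reminder of what each row measures, here is a minimal sketch of the Scale, Add and Triad loops; the array length, initialization and absence of timing code are simplifications rather than the benchmark's actual parameters (the real test is built with the -mcmodel=medium -O3 -march=native -fopenmp flags shown in its footnote). Copy and Scale move two arrays per iteration while Add and Triad move three, which is how STREAM counts the bytes behind its MB/s figures. Build with something like cc -O3 -fopenmp stream_sketch.c.

    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)   /* illustrative array length, not STREAM's configured size */

    int main(void) {
        double *a = malloc(sizeof(double) * N);
        double *b = malloc(sizeof(double) * N);
        double *c = malloc(sizeof(double) * N);
        const double scalar = 3.0;
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

        /* Scale: b = scalar * c */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) b[i] = scalar * c[i];

        /* Add: c = a + b */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) c[i] = a[i] + b[i];

        /* Triad: a = b + scalar * c */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) a[i] = b[i] + scalar * c[i];

        printf("%f\n", a[0]);   /* keep the loops from being optimized away */
        free(a); free(b); free(c);
        return 0;
    }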
Phoronix Test Suite v10.8.5