threadripper eo 2022 AMD Ryzen Threadripper 3960X 24-Core testing with an MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) and a Gigabyte AMD Radeon RX 5500 XT 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2212279-NE-THREADRIP40&sor&grs.
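For anyone who wants to rerun this comparison locally, the OpenBenchmarking.org ID in the URL above (2212279-NE-THREADRIP40) can be handed back to the Phoronix Test Suite, which will offer to run the same tests on the local machine and merge the new numbers into this comparison. A minimal sketch, assuming phoronix-test-suite is already installed and on the PATH (wrapped in Python purely so all snippets here share one language):

    import subprocess

    # Run the Phoronix Test Suite against the public result referenced above.
    # Test selection and result naming are interactive prompts from PTS itself.
    subprocess.run(
        ["phoronix-test-suite", "benchmark", "2212279-NE-THREADRIP40"],
        check=True,
    )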
threadripper eo 2022 system configuration (identical for runs a and b):

Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: Gigabyte AMD Radeon RX 5500 XT 8GB (1900/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: VA2431
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

OpenBenchmarking.org test notes:
Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details - NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301025
Graphics Details - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: xxx-xxx-xxx
Java Details - OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Details - Python 3.10.4
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
threadripper eo 2022 pgbench: 100 - 250 - Read Write - Average Latency pgbench: 100 - 250 - Read Write stargate: 480000 - 512 stargate: 44100 - 512 onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU stress-ng: IO_uring redis: GET - 50 pgbench: 100 - 100 - Read Write pgbench: 100 - 100 - Read Write - Average Latency onednn: IP Shapes 3D - u8s8f32 - CPU unvanquished: 1920 x 1080 - Ultra svt-av1: Preset 12 - Bosphorus 4K onednn: Recurrent Neural Network Inference - f32 - CPU redis: SET - 50 mnn: inception-v3 pgbench: 100 - 100 - Read Only - Average Latency pgbench: 100 - 100 - Read Only stress-ng: Futex mnn: mobilenetV3 ncnn: CPU - resnet50 stargate: 192000 - 1024 webp2: Default stress-ng: CPU Cache ncnn: CPU - googlenet onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU mnn: resnet-v2-50 onednn: Recurrent Neural Network Inference - u8s8f32 - CPU spark: 1000000 - 100 - SHA-512 Benchmark Time stargate: 480000 - 1024 ncnn: CPU - squeezenet_ssd mnn: MobileNetV2_224 stargate: 192000 - 512 ncnn: CPU - alexnet onednn: Recurrent Neural Network Training - f32 - CPU ncnn: CPU - resnet18 rocksdb: Seq Fill deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU rocksdb: Update Rand spark: 1000000 - 100 - Broadcast Inner Join Test Time pgbench: 100 - 250 - Read Only - Average Latency mnn: nasnet spark: 1000000 - 100 - Repartition Test Time pgbench: 100 - 250 - Read Only ncnn: CPU - mobilenet stargate: 96000 - 1024 onednn: Convolution Batch Shapes Auto - f32 - CPU graphics-magick: HWB Color Space compress-7zip: Decompression Rating graphics-magick: Noise-Gaussian ncnn: CPU - mnasnet rocksdb: Read Rand Write Rand numenta-nab: KNN CAD stress-ng: CPU Stress ncnn: CPU - efficientnet-b0 srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM ncnn: CPU - shufflenet-v2 onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU ncnn: CPU - vision_transformer ncnn: CPU-v2-v2 - mobilenet-v2 srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM openvino: Face Detection FP16 - CPU ncnn: CPU - yolov4-tiny mnn: squeezenetv1.1 nginx: 500 aom-av1: Speed 6 Realtime - Bosphorus 4K ncnn: CPU-v3-v3 - mobilenet-v3 srsran: OFDM_Test deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream openvino: Face Detection FP16 - CPU clickhouse: 100M Rows Web Analytics Dataset, First Run / Cold Cache redis: GET - 500 jpegxl-decode: All onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU spark: 1000000 - 100 - Inner Join Test Time unvanquished: 1920 x 1080 - High onednn: IP Shapes 1D - f32 - CPU jpegxl-decode: 1 spark: 1000000 - 100 - Group By Test Time openvino: Machine Translation EN To DE FP16 - CPU stress-ng: Socket Activity openvino: Machine Translation EN To DE FP16 - CPU stress-ng: NUMA aom-av1: Speed 8 Realtime - Bosphorus 1080p ncnn: CPU - blazeface svt-av1: Preset 4 - Bosphorus 1080p rocksdb: Rand Read numenta-nab: Windowed Gaussian 
numenta-nab: Bayesian Changepoint stargate: 96000 - 512 openfoam: drivaerFastback, Small Mesh Size - Execution Time clickhouse: 100M Rows Web Analytics Dataset, Third Run stress-ng: Matrix Math openradioss: Bumper Beam svt-av1: Preset 8 - Bosphorus 1080p graphics-magick: Rotate nginx: 100 aom-av1: Speed 10 Realtime - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 4K stress-ng: Glibc C String Functions onednn: Deconvolution Batch shapes_1d - f32 - CPU stress-ng: Context Switching stargate: 44100 - 1024 onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU openfoam: drivaerFastback, Small Mesh Size - Mesh Time avifenc: 10, Lossless smhasher: FarmHash32 x86_64 AVX spark: 1000000 - 100 - Calculate Pi Benchmark aom-av1: Speed 9 Realtime - Bosphorus 4K smhasher: t1ha0_aes_avx2 x86_64 openvino: Weld Porosity Detection FP16 - CPU rocksdb: Rand Fill stress-ng: Forking pgbench: 100 - 50 - Read Only - Average Latency openvino: Weld Porosity Detection FP16 - CPU webp: Quality 100, Lossless aom-av1: Speed 6 Two-Pass - Bosphorus 1080p unpack-linux: linux-5.19.tar.xz jpegxl: JPEG - 80 ncnn: CPU - vgg16 graphics-magick: Swirl stream: Copy smhasher: wyhash rocksdb: Read While Writing cockroach: MoVR - 1024 pgbench: 100 - 50 - Read Only cockroach: MoVR - 128 ffmpeg: libx265 - Upload aom-av1: Speed 10 Realtime - Bosphorus 4K ffmpeg: libx265 - Upload openradioss: Rubber O-Ring Seal Installation jpegxl: PNG - 90 spacy: en_core_web_lg nginx: 1000 mnn: mobilenet-v1-1.0 tensorflow: CPU - 16 - AlexNet unvanquished: 1920 x 1080 - Medium scikit-learn: TSNE MNIST Dataset openvino: Age Gender Recognition Retail 0013 FP16 - CPU aom-av1: Speed 0 Two-Pass - Bosphorus 1080p cockroach: KV, 60% Reads - 512 cockroach: KV, 50% Reads - 256 encodec: 1.5 kbps scikit-learn: MNIST Dataset ffmpeg: libx264 - Live aom-av1: Speed 9 Realtime - Bosphorus 1080p openvino: Vehicle Detection FP16 - CPU ffmpeg: libx264 - Live cockroach: MoVR - 512 openvino: Person Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU clickhouse: 100M Rows Web Analytics Dataset, Second Run encode-flac: WAV To FLAC dragonflydb: 50 - 5:1 openvino: Age Gender Recognition Retail 0013 FP16 - CPU cockroach: MoVR - 256 webp: Quality 100, Highest Compression openvino: Person Vehicle Bike Detection FP16 - CPU cockroach: KV, 50% Reads - 128 openvino: Person Vehicle Bike Detection FP16 - CPU mnn: SqueezeNetV1.0 jpegxl: JPEG - 90 deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream encodec: 6 kbps numenta-nab: Relative Entropy deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream jpegxl: PNG - 80 openradioss: INIVOL and Fluid Structure Interaction Drop Container openvino: Person Detection FP16 - CPU build-erlang: Time To Compile stress-ng: SENDFILE cockroach: KV, 10% Reads - 128 smhasher: FarmHash128 redis: SET - 500 stress-ng: System V Message Passing stream: Add y-cruncher: 500M aom-av1: Speed 4 Two-Pass - Bosphorus 1080p stress-ng: Malloc smhasher: t1ha2_atonce cockroach: KV, 10% Reads - 512 svt-av1: Preset 12 - Bosphorus 1080p encodec: 3 kbps xmrig: Wownero - 1M rocksdb: Rand Fill Sync stream: Triad dragonflydb: 50 - 1:1 smhasher: MeowHash x86_64 AES-NI openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU aom-av1: Speed 6 Realtime - Bosphorus 1080p openradioss: Cell Phone Drop Test svt-av1: Preset 4 - Bosphorus 4K compress-7zip: Compression Rating aom-av1: Speed 8 Realtime - Bosphorus 4K cockroach: KV, 95% Reads - 256 build-nodejs: Time To Compile openvino: Person Detection FP32 - CPU smhasher: fasthash32 
deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream ffmpeg: libx264 - Upload deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream ffmpeg: libx264 - Upload encodec: 24 kbps astcenc: Fast openvkl: vklBenchmark Scalar webp: Quality 100 openvino: Vehicle Detection FP16-INT8 - CPU openradioss: Bird Strike on Windshield openvino: Vehicle Detection FP16-INT8 - CPU openfoam: motorBike - Mesh Time avifenc: 0 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream webp: Default stress-ng: Memory Copying blender: Pabellon Barcelona - CPU-Only webp2: Quality 100, Compression Effort 5 avifenc: 6, Lossless cockroach: KV, 50% Reads - 512 deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream build-linux-kernel: allmodconfig openvino: Person Detection FP32 - CPU stress-ng: MEMFD numenta-nab: Contextual Anomaly Detector OSE dragonflydb: 200 - 1:5 srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM brl-cad: VGR Performance Metric ffmpeg: libx265 - Video On Demand rav1e: 6 pgbench: 100 - 50 - Read Write - Average Latency stream: Scale srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM cockroach: KV, 60% Reads - 128 y-cruncher: 1B nekrs: TurboPipe Periodic ffmpeg: libx265 - Video On Demand aom-av1: Speed 4 Two-Pass - Bosphorus 4K ffmpeg: libx264 - Platform blender: Barbershop - CPU-Only cockroach: KV, 50% Reads - 1024 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream pgbench: 100 - 50 - Read Write ffmpeg: libx264 - Platform cockroach: KV, 95% Reads - 1024 svt-av1: Preset 8 - Bosphorus 4K deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream graphics-magick: Enhanced openvkl: vklBenchmark ISPC onednn: IP Shapes 3D - f32 - CPU smhasher: Spooky32 srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM dragonflydb: 200 - 5:1 srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM cockroach: KV, 60% Reads - 1024 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream stress-ng: MMAP astcenc: Exhaustive srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM ffmpeg: libx264 - Video On Demand ffmpeg: libx264 - Video On Demand rav1e: 10 rav1e: 1 rav1e: 5 numenta-nab: Earthgecko Skyline stress-ng: Vector Math spark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframe openvino: Face Detection FP16-INT8 - CPU nginx: 200 aom-av1: Speed 6 Two-Pass - Bosphorus 4K cockroach: KV, 60% Reads - 256 stress-ng: Crypto blender: Fishy Cat - CPU-Only cockroach: KV, 95% Reads - 512 cockroach: KV, 10% Reads - 1024 ffmpeg: libx265 - Platform build-wasmer: Time To Compile astcenc: Thorough build-php: Time To Compile astcenc: Medium stress-ng: Mutex ffmpeg: libx265 - Platform deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM blender: Classroom - CPU-Only avifenc: 2 deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream smhasher: SHA3-256 deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream ncnn: CPU - regnety_400m openvino: Face Detection FP16-INT8 
- CPU build-python: Released Build, PGO + LTO Optimized graphics-magick: Resizing spacy: en_core_web_trf tensorflow: CPU - 16 - GoogLeNet build-linux-kernel: defconfig openvino: Weld Porosity Detection FP16-INT8 - CPU deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream build-python: Default deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream ncnn: CPU - FastestDet onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU cockroach: KV, 10% Reads - 256 ffmpeg: libx265 - Live svt-av1: Preset 13 - Bosphorus 1080p openvino: Weld Porosity Detection FP16-INT8 - CPU dragonflydb: 50 - 1:5 scikit-learn: Sparse Rand Projections, 100 Iterations ffmpeg: libx265 - Live avifenc: 6 deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream blender: BMW27 - CPU-Only stress-ng: Atomic xmrig: Monero - 1M cockroach: KV, 95% Reads - 128 stress-ng: Glibc Qsort Data Sorting stress-ng: Semaphores openfoam: motorBike - Execution Time dragonflydb: 200 - 1:1 natron: Spaceship openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU aom-av1: Speed 0 Two-Pass - Bosphorus 4K graphics-magick: Sharpen srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM webp2: Quality 95, Compression Effort 7 webp2: Quality 75, Compression Effort 7 webp: Quality 100, Lossless, Highest Compression jpegxl: JPEG - 100 jpegxl: PNG - 100 smhasher: MeowHash x86_64 AES-NI smhasher: t1ha0_aes_avx2 x86_64 smhasher: FarmHash32 x86_64 AVX smhasher: t1ha2_atonce smhasher: FarmHash128 smhasher: fasthash32 smhasher: Spooky32 smhasher: SHA3-256 smhasher: wyhash a b 5.506 45407 4.122029 3.892921 6.54191 22416.27 2481064.75 40535 2.467 0.652318 248.6 147.341 1295.49 1746939.25 23.254 0.153 653213 3430545.94 2.282 21.43 2.436915 9.6 174.95 18.44 2604.63 2559.09 1361.72 21.564 1366.14 3.40 4.917282 21.24 4.361 1.244804 8.95 2614.46 12.14 922415 23.1605 43.1635 4.48456 684330 1.28 0.386 14.311 1.76 646832 15.69 3.681535 9.02473 1043 177640 514 6.29 2683538 129.147 63007.36 9.26 175.5 7.89 1.68952 133.48 6.92 413.1 6.82 25.55 4.311 133262.41 25.31 6.44 140100000 69.4812 37.4433 26.7006 172.6456 1735.28 196.74 2271735.25 245.68 2.63187 1.43493 1.67 257 1.50644 44.6 4.67 63.47 15355.92 188.92 604.44 60.72 3.67 8.046 98049490 6.199 20.148 1.911816 121.45626 240.17 140597.25 98.86 111.927 638 148017.08 72.52 145.752 4187483.04 6.06539 9618965.78 5.223813 2.06629 26.871733 5.153 27249.95 68.960273121 40.37 66642.16 18.11 811216 53674.14 0.074 661.93 1.50 34.6 7.006 8.59 33.71 1178 54649.6 23068.13 4506140 474.9 671452 477.5 183.908496366 40.29 13.73 78.18 8.89 12454 125263.22 3.054 62.47 260 31.586 21060.95 0.96 80024 77546.7 36.013 103.896 25.43 72.59 380.28 198.58 476.6 4.3 31.53 236.78 17.802 3193072.56 1.13 476.5 3.44 686.15 70558.3 17.47 6.491 8.51 8.8688 37.604 13.21 112.6594 8.95 287.95 2734.68 90.928 414239.19 50479.2 15823.93 1784408 9343696.19 36849.4 11.143 14.13 50825608.57 15569.87 61112 400.811 38.187 21153.3 25136 36787.1 3341824.71 37533.1 22553.27 43.08 54.86 3.5 167681 31.97 100761.5 232.345 2750.06 6591.27 107.4771 12.37 59.8698 111.3907 16.6914 204.11375323 42.075 366.3281 184 11.10 16.75 173.48 715.98 40.3729 91.039 13.3862 74.6969 17.87 4863.37 176.4 8.39 7.607 75643.3 152.2314 476.667 4.33 891.86 38.983 3609156.11 162.9 405453 264.63 
3.656 1.712 33038 376.6 77055.3 22.825 67648400000 28.63 7.91 160.71 583.81 71952.1 608.4782 29212 47.13 94157.8 56.689 78.7202 568 288 5.89944 14254.7 150.6 402.5 3273370.07 160.2 76564 19.6428 383.32 1.5739 106.8 47.17 160.60 8.018 0.78 2.741 77.22 178859.7 4.20 8.46 148446.23 13.54 82286.1 44751.55 69.44 98031.9 59301.4 28.62 46.857 14.4059 43.61 120.3254 12785778.73 264.66 19.5601 376.6 148.83 47.559 74.9151 13.3472 146.7 611.8922 26.61 1410.67 257.57 2129 1465 44.45 43.944 843.69 222.1611 15.429 53.9562 8.91 9.16751 61211.1 72.95 359.56 28.43 3519377.56 162.422 69.22 4.012 11.9444 83.6707 154.8854 77.4472 53.91 421253.99 15342.8 103073.5 385.8 4869829.14 71.6006 3388128.93 4.2 1.06 0.34 376 60.2 0.15 0.30 0.61 0.66 0.68 57.723 34.721 43.284 34.763 63.803 37.175 51.661 2649.376 26.329 9.786 25548 2.722708 2.678564 4.7447 17193.98 2015000.12 33658 2.971 0.7778 286.5 129.515 1451.46 1590308.75 25.51 0.166 602288 3166174.89 2.455 23.05 2.268748 10.27 186.95 19.69 2772.01 2719.77 1441.78 22.828 1446.12 3.59872799 5.196953 22.44 4.607 1.314658 9.45 2759.2 12.78 876834 24.336 41.0795 4.27034 651892 1.34 0.404 14.962 1.84 618774 16.39 3.845343 9.42618 1089 170171 535 6.54 2589021 133.475 65071.9 9.56 170 8.14 1.74252 137.59 7.13 401.2 7.02 26.29 4.191 137072.73 24.61 6.62 144000000 67.6453 36.475 27.4093 177.2167 1690.53 201.86 2214977.25 251.91 2.69706 1.47047 1.63 263.3 1.54322 45.67 4.78 62.04 15707.68 193.23 591.09 62.08 3.75 7.877 100100766 6.327 20.56 1.947376 123.71063 235.82 138081.21 100.6 113.858 649 145523.99 73.76 148.194 4118591.13 6.16553 9774697.61 5.307245 2.09863 27.275142 5.078 26861.97 69.93 40.93 65731.67 18.36 800273 52953.36 0.075 653.16 1.52 34.15 7.096 8.7 34.14 1193 55314 22791.83 4452658 480.5 663640 483.1 181.78 40.76 13.89 79.08 8.99 12319 126633 3.021 63.15 262.8 31.926 20837.59 0.97 79220.4 76768.6 36.372 104.931 25.19 73.28 376.7 200.46 481.1 4.34 31.82 234.64 17.963 3164997.08 1.14 480.7 3.41 692.14 69948.7 17.32 6.547 8.58 8.7988 37.903 13.106 113.5482 9.02 290.2 2714.24 90.249 411184.4 50108.8 15708.02 1797429.5 9276368.69 37116 11.222 14.23 50473178.15 15465.65 60710 403.432 38.433 21018.6 25297 37021.8 3320752.26 37302 22417.03 43.34 55.19 3.521 168680 32.16 100166.8 233.722 2734.21 6553.7 106.8673 12.30 59.5355 112.0146 16.7848 205.24 42.307 364.3208 183 11.04 16.84 174.41 712.18 40.5873 90.561 13.3177 75.0804 17.78 4839.31 177.27 8.43 7.571 75289.1 152.9447 478.887 4.31 887.76 39.16 3593257.69 162.2 403757 263.53 3.641 1.705 33173.5 375.1 77363.2 22.916 67381400000 28.74 7.88 161.32 586 72222 610.7512 29318 46.96 93822.1 56.487 78.4402 570 287 5.91961 14206.73 150.1 403.8 3263145.89 160.7 76335.4 19.5842 382.19 1.5693 107.1 47.04 161.04 7.997 0.778 2.748 77.408 178430.01 4.21 8.48 148101.85 13.57 82467 44653.67 69.59 97822.4 59175.4 28.68 46.955 14.3759 43.699 120.0841 12810594.01 264.16 19.5237 375.9 149.1 47.639 75.0343 13.326 146.47 610.9332 26.65 1408.6 257.945 2132 1463 44.39 44.003 844.82 221.8827 15.41 54.0192 8.9 9.15733 61278.6 73.03 359.18 28.4 3515736.22 162.588 69.15 4.008 11.9559 83.5943 154.7593 77.5099 53.95 421495.46 15334.8 103111.7 385.67 4868774.73 71.6125 3388656.73 4.2 1.06 0.34 376 60.2 0.15 0.30 0.61 0.66 0.68 57.928 35.155 43.29 35.019 63.864 37.264 51.627 2649.302 26.387 OpenBenchmarking.org
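The overview above lists every test identifier, then (after the "a b" marker) the raw values for run a in test order, followed by the raw values for run b. The per-test graphs below state whether a higher or lower number is better for each metric, and any relative comparison has to respect that direction. A minimal sketch of that bookkeeping, using two results from this page as worked examples (SVT-AV1 Preset 12 / Bosphorus 4K in FPS, and PostgreSQL 100/250 read-write average latency in ms):

    # Relative difference between run a and run b for one benchmark entry,
    # normalised so a positive percentage always means run a did better.
    def relative_advantage(a: float, b: float, more_is_better: bool) -> float:
        if more_is_better:
            return (a / b - 1.0) * 100.0
        return (b / a - 1.0) * 100.0

    # SVT-AV1, Preset 12, Bosphorus 4K (FPS, more is better): ~13.8% in favour of a.
    print(relative_advantage(147.341, 129.515, more_is_better=True))
    # PostgreSQL 100/250 read-write average latency (ms, fewer is better): ~77.7% in favour of a.
    print(relative_advantage(5.506, 9.786, more_is_better=False))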
PostgreSQL Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency a b 3 6 9 12 15 5.506 9.786 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
PostgreSQL Scaling Factor: 100 - Clients: 250 - Mode: Read Write OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Write a b 10K 20K 30K 40K 50K 45407 25548 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
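The two PostgreSQL charts above are two views of the same run: with 250 concurrent clients, the reported average latency is essentially the client count divided by the transaction rate. A quick sanity check against the numbers shown here (approximate, since pgbench averages per client):

    # Average latency in ms implied by a TPS figure at a given client count.
    def implied_latency_ms(clients: int, tps: float) -> float:
        return clients / tps * 1000.0

    print(implied_latency_ms(250, 45407))  # ~5.51 ms, matching run a's reported 5.506 ms
    print(implied_latency_ms(250, 25548))  # ~9.79 ms, matching run b's reported 9.786 ms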
Stargate Digital Audio Workstation Sample Rate: 480000 - Buffer Size: 512 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 480000 - Buffer Size: 512 a b 0.9275 1.855 2.7825 3.71 4.6375 4.122029 2.722708 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
Stargate Digital Audio Workstation Sample Rate: 44100 - Buffer Size: 512 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 44100 - Buffer Size: 512 a b 0.8759 1.7518 2.6277 3.5036 4.3795 3.892921 2.678564 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU b a 2 4 6 8 10 4.74470 6.54191 MIN: 4.47 MIN: 6.16 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Stress-NG Test: IO_uring OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: IO_uring a b 5K 10K 15K 20K 25K 22416.27 17193.98 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Redis Test: GET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 50 a b 500K 1000K 1500K 2000K 2500K 2481064.75 2015000.12 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
PostgreSQL Scaling Factor: 100 - Clients: 100 - Mode: Read Write OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Write a b 9K 18K 27K 36K 45K 40535 33658 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
PostgreSQL Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency a b 0.6685 1.337 2.0055 2.674 3.3425 2.467 2.971 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
oneDNN Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU a b 0.175 0.35 0.525 0.7 0.875 0.652318 0.777800 MIN: 0.56 MIN: 0.6 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Unvanquished Resolution: 1920 x 1080 - Effects Quality: Ultra OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: Ultra b a 60 120 180 240 300 286.5 248.6
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b 30 60 90 120 150 147.34 129.52 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
oneDNN Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU a b 300 600 900 1200 1500 1295.49 1451.46 MIN: 1280.49 MIN: 1416.39 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Redis Test: SET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 50 a b 400K 800K 1200K 1600K 2000K 1746939.25 1590308.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Mobile Neural Network Model: inception-v3 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: inception-v3 a b 6 12 18 24 30 23.25 25.51 MIN: 23.03 / MAX: 23.59 MIN: 25.34 / MAX: 25.86 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
PostgreSQL Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency a b 0.0374 0.0748 0.1122 0.1496 0.187 0.153 0.166 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
PostgreSQL Scaling Factor: 100 - Clients: 100 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Only a b 140K 280K 420K 560K 700K 653213 602288 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Stress-NG Test: Futex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Futex a b 700K 1400K 2100K 2800K 3500K 3430545.94 3166174.89 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Mobile Neural Network Model: mobilenetV3 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 a b 0.5524 1.1048 1.6572 2.2096 2.762 2.282 2.455 MIN: 2.24 / MAX: 2.33 MIN: 2.41 / MAX: 2.64 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN Target: CPU - Model: resnet50 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet50 a b 6 12 18 24 30 21.43 23.05 MIN: 20.84 / MAX: 22.74 MIN: 21.55 / MAX: 58.12 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Stargate Digital Audio Workstation Sample Rate: 192000 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 192000 - Buffer Size: 1024 a b 0.5483 1.0966 1.6449 2.1932 2.7415 2.436915 2.268748 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
WebP2 Image Encode Encode Settings: Default OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default b a 3 6 9 12 15 10.27 9.60 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Stress-NG Test: CPU Cache OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Cache b a 40 80 120 160 200 186.95 174.95 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
NCNN Target: CPU - Model: googlenet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: googlenet a b 5 10 15 20 25 18.44 19.69 MIN: 17.65 / MAX: 19.84 MIN: 18.17 / MAX: 25.94 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU a b 600 1200 1800 2400 3000 2604.63 2772.01 MIN: 2588.48 MIN: 2730.23 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU a b 600 1200 1800 2400 3000 2559.09 2719.77 MIN: 2544.08 MIN: 2677.09 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU a b 300 600 900 1200 1500 1361.72 1441.78 MIN: 1338.9 MIN: 1407.45 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Mobile Neural Network Model: resnet-v2-50 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 a b 5 10 15 20 25 21.56 22.83 MIN: 21.35 / MAX: 21.84 MIN: 22.67 / MAX: 23.06 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
oneDNN Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU a b 300 600 900 1200 1500 1366.14 1446.12 MIN: 1343.04 MIN: 1407.49 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Apache Spark Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time a b 0.8097 1.6194 2.4291 3.2388 4.0485 3.40000000 3.59872799
Stargate Digital Audio Workstation Sample Rate: 480000 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 480000 - Buffer Size: 1024 b a 1.1693 2.3386 3.5079 4.6772 5.8465 5.196953 4.917282 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
NCNN Target: CPU - Model: squeezenet_ssd OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: squeezenet_ssd a b 5 10 15 20 25 21.24 22.44 MIN: 19.92 / MAX: 23.46 MIN: 20.15 / MAX: 54.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Mobile Neural Network Model: MobileNetV2_224 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: MobileNetV2_224 a b 1.0366 2.0732 3.1098 4.1464 5.183 4.361 4.607 MIN: 4.33 / MAX: 4.48 MIN: 4.57 / MAX: 4.8 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Stargate Digital Audio Workstation Sample Rate: 192000 - Buffer Size: 512 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 192000 - Buffer Size: 512 b a 0.2958 0.5916 0.8874 1.1832 1.479 1.314658 1.244804 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
NCNN Target: CPU - Model: alexnet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: alexnet a b 3 6 9 12 15 8.95 9.45 MIN: 8.42 / MAX: 10.23 MIN: 8.49 / MAX: 10.69 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU a b 600 1200 1800 2400 3000 2614.46 2759.20 MIN: 2598.48 MIN: 2717.67 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
NCNN Target: CPU - Model: resnet18 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet18 a b 3 6 9 12 15 12.14 12.78 MIN: 11.54 / MAX: 13.22 MIN: 11.63 / MAX: 14.62 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Facebook RocksDB Test: Sequential Fill OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Sequential Fill a b 200K 400K 600K 800K 1000K 922415 876834 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream a b 6 12 18 24 30 23.16 24.34
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream a b 10 20 30 40 50 43.16 41.08
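For the single-stream DeepSparse scenarios, the two charts above are near-reciprocal views of the same measurement: items per second is roughly 1000 divided by the per-batch latency in milliseconds (only roughly, since the two figures are averaged independently). A quick check against the values shown:

    # Single-stream throughput implied by the reported per-batch latency.
    def implied_items_per_sec(ms_per_batch: float) -> float:
        return 1000.0 / ms_per_batch

    print(implied_items_per_sec(23.16))  # ~43.2 items/sec vs. the reported 43.16 (run a)
    print(implied_items_per_sec(24.34))  # ~41.1 items/sec vs. the reported 41.08 (run b)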
oneDNN Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU b a 1.009 2.018 3.027 4.036 5.045 4.27034 4.48456 MIN: 4.03 MIN: 4.29 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Facebook RocksDB Test: Update Random OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Update Random a b 150K 300K 450K 600K 750K 684330 651892 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Apache Spark Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time a b 0.3015 0.603 0.9045 1.206 1.5075 1.28 1.34
PostgreSQL Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency a b 0.0909 0.1818 0.2727 0.3636 0.4545 0.386 0.404 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Mobile Neural Network Model: nasnet OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: nasnet a b 4 8 12 16 20 14.31 14.96 MIN: 14.19 / MAX: 14.67 MIN: 14.82 / MAX: 15.24 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Apache Spark Row Count: 1000000 - Partitions: 100 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Repartition Test Time a b 0.414 0.828 1.242 1.656 2.07 1.76 1.84
PostgreSQL Scaling Factor: 100 - Clients: 250 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Only a b 140K 280K 420K 560K 700K 646832 618774 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
NCNN Target: CPU - Model: mobilenet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mobilenet a b 4 8 12 16 20 15.69 16.39 MIN: 15.48 / MAX: 16.49 MIN: 15.57 / MAX: 24.36 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Stargate Digital Audio Workstation Sample Rate: 96000 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 96000 - Buffer Size: 1024 b a 0.8652 1.7304 2.5956 3.4608 4.326 3.845343 3.681535 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a b 3 6 9 12 15 9.02473 9.42618 MIN: 8.88 MIN: 8.8 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
GraphicsMagick Operation: HWB Color Space OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space b a 200 400 600 800 1000 1089 1043 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
7-Zip Compression Test: Decompression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 22.01 Test: Decompression Rating a b 40K 80K 120K 160K 200K 177640 170171 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
GraphicsMagick Operation: Noise-Gaussian OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian b a 120 240 360 480 600 535 514 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
NCNN Target: CPU - Model: mnasnet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mnasnet a b 2 4 6 8 10 6.29 6.54 MIN: 6.17 / MAX: 6.88 MIN: 6.2 / MAX: 7.42 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Facebook RocksDB Test: Read Random Write Random OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read Random Write Random a b 600K 1200K 1800K 2400K 3000K 2683538 2589021 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Numenta Anomaly Benchmark Detector: KNN CAD OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: KNN CAD a b 30 60 90 120 150 129.15 133.48
Stress-NG Test: CPU Stress OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Stress b a 14K 28K 42K 56K 70K 65071.90 63007.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
NCNN Target: CPU - Model: efficientnet-b0 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: efficientnet-b0 a b 3 6 9 12 15 9.26 9.56 MIN: 9.16 / MAX: 9.93 MIN: 9.17 / MAX: 10.47 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 40 80 120 160 200 175.5 170.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
NCNN Target: CPU - Model: shufflenet-v2 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: shufflenet-v2 a b 2 4 6 8 10 7.89 8.14 MIN: 7.72 / MAX: 8.67 MIN: 7.68 / MAX: 16.58 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU a b 0.3921 0.7842 1.1763 1.5684 1.9605 1.68952 1.74252 MIN: 1.63 MIN: 1.63 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
NCNN Target: CPU - Model: vision_transformer OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer a b 30 60 90 120 150 133.48 137.59 MIN: 133.04 / MAX: 138.19 MIN: 134.63 / MAX: 191.94 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN Target: CPU-v2-v2 - Model: mobilenet-v2 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 a b 2 4 6 8 10 6.92 7.13 MIN: 6.79 / MAX: 7.59 MIN: 6.77 / MAX: 8.08 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 90 180 270 360 450 413.1 401.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16 - Device: CPU b a 2 4 6 8 10 7.02 6.82 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
NCNN Target: CPU - Model: yolov4-tiny OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: yolov4-tiny a b 6 12 18 24 30 25.55 26.29 MIN: 24.49 / MAX: 30.39 MIN: 24.78 / MAX: 29.98 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Mobile Neural Network Model: squeezenetv1.1 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: squeezenetv1.1 b a 0.97 1.94 2.91 3.88 4.85 4.191 4.311 MIN: 4.14 / MAX: 4.28 MIN: 4.26 / MAX: 4.37 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
nginx Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 b a 30K 60K 90K 120K 150K 137072.73 133262.41 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
AOM AV1 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K a b 6 12 18 24 30 25.31 24.61 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
NCNN Target: CPU-v3-v3 - Model: mobilenet-v3 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 a b 2 4 6 8 10 6.44 6.62 MIN: 6.34 / MAX: 7.1 MIN: 6.24 / MAX: 7.88 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
srsRAN Test: OFDM_Test OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test b a 30M 60M 90M 120M 150M 144000000 140100000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b 15 30 45 60 75 69.48 67.65
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream b a 9 18 27 36 45 36.48 37.44
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream b a 6 12 18 24 30 27.41 26.70
Neural Magic DeepSparse Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b 40 80 120 160 200 172.65 177.22
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16 - Device: CPU b a 400 800 1200 1600 2000 1690.53 1735.28 MIN: 1466.54 / MAX: 1987.53 MIN: 1511.37 / MAX: 1909.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
ClickHouse 100M Rows Web Analytics Dataset, First Run / Cold Cache OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, First Run / Cold Cache b a 40 80 120 160 200 201.86 196.74 MIN: 17.35 / MAX: 5454.55 MIN: 17.5 / MAX: 3529.41 1. ClickHouse server version 22.5.4.19 (official build).
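As the unit label indicates, the ClickHouse score is a geometric mean of the queries-per-minute achieved on the benchmark's individual queries, which keeps a single unusually fast or slow query from dominating the result. A minimal sketch of that aggregation; the per-query rates below are hypothetical, for illustration only, not taken from this run:

    import math

    # Geometric mean of per-query queries-per-minute figures.
    def geo_mean(values: list[float]) -> float:
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # Hypothetical per-query rates, illustration only.
    print(geo_mean([120.0, 450.0, 90.0, 300.0]))  # ~195 queries per minute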
Redis Test: GET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 500 a b 500K 1000K 1500K 2000K 2500K 2271735.25 2214977.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
JPEG XL Decoding libjxl CPU Threads: All OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: All b a 60 120 180 240 300 251.91 245.68
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b 0.6068 1.2136 1.8204 2.4272 3.034 2.63187 2.69706 MIN: 2.56 MIN: 2.55 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
oneDNN Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU a b 0.3309 0.6618 0.9927 1.3236 1.6545 1.43493 1.47047 MIN: 1.26 MIN: 1.24 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Apache Spark Row Count: 1000000 - Partitions: 100 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Inner Join Test Time b a 0.3758 0.7516 1.1274 1.5032 1.879 1.63 1.67
Unvanquished Resolution: 1920 x 1080 - Effects Quality: High OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: High b a 60 120 180 240 300 263.3 257.0
oneDNN Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b 0.3472 0.6944 1.0416 1.3888 1.736 1.50644 1.54322 MIN: 1.38 MIN: 1.37 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
JPEG XL Decoding libjxl CPU Threads: 1 OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: 1 b a 10 20 30 40 50 45.67 44.60
Apache Spark Row Count: 1000000 - Partitions: 100 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Group By Test Time a b 1.0755 2.151 3.2265 4.302 5.3775 4.67 4.78
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b 14 28 42 56 70 63.47 62.04 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Stress-NG Test: Socket Activity OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Socket Activity b a 3K 6K 9K 12K 15K 15707.68 15355.92 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenVINO Model: Machine Translation EN To DE FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b 40 80 120 160 200 188.92 193.23 MIN: 155.98 / MAX: 237.74 MIN: 153.71 / MAX: 246.32 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Stress-NG Test: NUMA OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: NUMA a b 130 260 390 520 650 604.44 591.09 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p b a 14 28 42 56 70 62.08 60.72 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
NCNN Target: CPU - Model: blazeface OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: blazeface a b 0.8438 1.6876 2.5314 3.3752 4.219 3.67 3.75 MIN: 3.59 / MAX: 4.3 MIN: 3.54 / MAX: 4.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b 2 4 6 8 10 8.046 7.877 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Facebook RocksDB Test: Random Read OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Read b a 20M 40M 60M 80M 100M 100100766 98049490 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Numenta Anomaly Benchmark Detector: Windowed Gaussian OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Windowed Gaussian a b 2 4 6 8 10 6.199 6.327
Numenta Anomaly Benchmark Detector: Bayesian Changepoint OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Bayesian Changepoint a b 5 10 15 20 25 20.15 20.56
Stargate Digital Audio Workstation Sample Rate: 96000 - Buffer Size: 512 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 96000 - Buffer Size: 512 b a 0.4382 0.8764 1.3146 1.7528 2.191 1.947376 1.911816 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenFOAM Input: drivaerFastback, Small Mesh Size - Execution Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time a b 30 60 90 120 150 121.46 123.71 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
ClickHouse 100M Rows Web Analytics Dataset, Third Run OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Third Run a b 50 100 150 200 250 240.17 235.82 MIN: 17.07 / MAX: 6666.67 MIN: 17.11 / MAX: 3750 1. ClickHouse server version 22.5.4.19 (official build).
Stress-NG Test: Matrix Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Matrix Math a b 30K 60K 90K 120K 150K 140597.25 138081.21 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenRadioss Model: Bumper Beam OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bumper Beam a b 20 40 60 80 100 98.86 100.60
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 8 - Input: Bosphorus 1080p b a 30 60 90 120 150 113.86 111.93 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
GraphicsMagick Operation: Rotate OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate b a 140 280 420 560 700 649 638 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
nginx Connections: 100 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 100 a b 30K 60K 90K 120K 150K 148017.08 145523.99 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p b a 16 32 48 64 80 73.76 72.52 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SVT-AV1 Encoder Mode: Preset 13 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 13 - Input: Bosphorus 4K b a 30 60 90 120 150 148.19 145.75 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Stress-NG Test: Glibc C String Functions OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc C String Functions a b 900K 1800K 2700K 3600K 4500K 4187483.04 4118591.13 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU a b 2 4 6 8 10 6.06539 6.16553 MIN: 3.72 MIN: 3.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Stress-NG Test: Context Switching OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Context Switching b a 2M 4M 6M 8M 10M 9774697.61 9618965.78 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stargate Digital Audio Workstation Sample Rate: 44100 - Buffer Size: 1024 OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 44100 - Buffer Size: 1024 b a 1.1941 2.3882 3.5823 4.7764 5.9705 5.307245 5.223813 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU a b 0.4722 0.9444 1.4166 1.8888 2.361 2.06629 2.09863 MIN: 1.98 MIN: 1.97 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenFOAM Input: drivaerFastback, Small Mesh Size - Mesh Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time a b 6 12 18 24 30 26.87 27.28 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
libavif avifenc Encoder Speed: 10, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 10, Lossless b a 1.1594 2.3188 3.4782 4.6376 5.797 5.078 5.153 1. (CXX) g++ options: -O3 -fPIC -lm
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX a b 6K 12K 18K 24K 30K 27249.95 26861.97 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Apache Spark Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark a b 16 32 48 64 80 68.96 69.93
AOM AV1 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K b a 9 18 27 36 45 40.93 40.37 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 a b 14K 28K 42K 56K 70K 66642.16 65731.67 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16 - Device: CPU a b 5 10 15 20 25 18.11 18.36 MIN: 10.34 / MAX: 38.17 MIN: 15.89 / MAX: 37.21 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Facebook RocksDB Test: Random Fill OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill a b 200K 400K 600K 800K 1000K 811216 800273 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Stress-NG Test: Forking OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Forking a b 11K 22K 33K 44K 55K 53674.14 52953.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
PostgreSQL Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency a b 0.0169 0.0338 0.0507 0.0676 0.0845 0.074 0.075 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16 - Device: CPU a b 140 280 420 560 700 661.93 653.16 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
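The FPS and average-latency charts for the same OpenVINO model describe one run from two angles; multiplying them estimates how many inference requests were in flight at once (roughly, by Little's law). A small sketch using the run-a figures above; the inferred stream count is our reading, not something the export states:

# Little's law: concurrency ≈ throughput × latency.
fps = 661.93                      # Weld Porosity Detection FP16, run a
latency_s = 18.11 / 1000.0        # average latency, run a
print(round(fps * latency_s, 1))  # ≈ 12 requests in flight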
WebP Image Encode Encode Settings: Quality 100, Lossless OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Lossless b a 0.342 0.684 1.026 1.368 1.71 1.52 1.50 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p a b 8 16 24 32 40 34.60 34.15 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Unpacking The Linux Kernel linux-5.19.tar.xz OpenBenchmarking.org Seconds, Fewer Is Better Unpacking The Linux Kernel 5.19 linux-5.19.tar.xz a b 2 4 6 8 10 7.006 7.096
JPEG XL libjxl Input: JPEG - Quality: 80 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 80 b a 2 4 6 8 10 8.70 8.59 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
NCNN Target: CPU - Model: vgg16 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vgg16 a b 8 16 24 32 40 33.71 34.14 MIN: 32.75 / MAX: 34.65 MIN: 32.66 / MAX: 42.06 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
GraphicsMagick Operation: Swirl OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Swirl b a 300 600 900 1200 1500 1193 1178 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
Stream Type: Copy OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Copy b a 12K 24K 36K 48K 60K 55314.0 54649.6 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
SMHasher Hash: wyhash OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: wyhash a b 5K 10K 15K 20K 25K 23068.13 22791.83 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Facebook RocksDB Test: Read While Writing OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read While Writing a b 1000K 2000K 3000K 4000K 5000K 4506140 4452658 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
CockroachDB Workload: MoVR - Concurrency: 1024 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: MoVR - Concurrency: 1024 b a 100 200 300 400 500 480.5 474.9
PostgreSQL Scaling Factor: 100 - Clients: 50 - Mode: Read Only OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Only a b 140K 280K 420K 560K 700K 671452 663640 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
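The read-only latency and TPS charts are two views of the same pgbench run: with 50 clients each waiting on one transaction at a time, throughput should be close to clients divided by average latency. A quick cross-check in Python on the run-a numbers (attributing the small shortfall to client-side overhead is our interpretation):

clients = 50
avg_latency_s = 0.074 / 1000.0    # Read Only average latency, run a
reported_tps = 671452
predicted_tps = clients / avg_latency_s
print(round(predicted_tps))       # ≈ 675,676
print(f"{(predicted_tps - reported_tps) / reported_tps:.1%}")  # ≈ 0.6% gap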
CockroachDB Workload: MoVR - Concurrency: 128 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: MoVR - Concurrency: 128 b a 100 200 300 400 500 483.1 477.5
FFmpeg Encoder: libx265 - Scenario: Upload OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Upload b a 40 80 120 160 200 181.78 183.91 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
AOM AV1 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K b a 9 18 27 36 45 40.76 40.29 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
FFmpeg Encoder: libx265 - Scenario: Upload OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Upload b a 4 8 12 16 20 13.89 13.73 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
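The Upload scenario's seconds and FPS charts come from the same encode, so their product should recover the clip's frame count and agree across runs. A quick consistency check, assuming both charts report the same pass:

# Elapsed time × frame rate ≈ frames encoded; both runs should agree.
runs = {"b": (181.78, 13.89), "a": (183.91, 13.73)}
for name, (seconds, fps) in runs.items():
    print(name, round(seconds * fps))   # ≈ 2525 frames for both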
OpenRadioss Model: Rubber O-Ring Seal Installation OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Rubber O-Ring Seal Installation a b 20 40 60 80 100 78.18 79.08
JPEG XL libjxl Input: PNG - Quality: 90 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 90 b a 3 6 9 12 15 8.99 8.89 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
spaCy Model: en_core_web_lg OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_lg a b 3K 6K 9K 12K 15K 12454 12319
nginx Connections: 1000 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 1000 b a 30K 60K 90K 120K 150K 126633.00 125263.22 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Mobile Neural Network Model: mobilenet-v1-1.0 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 b a 0.6872 1.3744 2.0616 2.7488 3.436 3.021 3.054 MIN: 2.99 / MAX: 3.17 MIN: 3.02 / MAX: 3.11 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
TensorFlow Device: CPU - Batch Size: 16 - Model: AlexNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: AlexNet b a 14 28 42 56 70 63.15 62.47
Unvanquished Resolution: 1920 x 1080 - Effects Quality: Medium OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: Medium b a 60 120 180 240 300 262.8 260.0
Scikit-Learn Benchmark: TSNE MNIST Dataset OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.1.3 Benchmark: TSNE MNIST Dataset a b 7 14 21 28 35 31.59 31.93
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 5K 10K 15K 20K 25K 21060.95 20837.59 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p b a 0.2183 0.4366 0.6549 0.8732 1.0915 0.97 0.96 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
CockroachDB Workload: KV, 60% Reads - Concurrency: 512 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 60% Reads - Concurrency: 512 a b 20K 40K 60K 80K 100K 80024.0 79220.4
CockroachDB Workload: KV, 50% Reads - Concurrency: 256 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 50% Reads - Concurrency: 256 a b 17K 34K 51K 68K 85K 77546.7 76768.6
EnCodec Target Bandwidth: 1.5 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 1.5 kbps a b 8 16 24 32 40 36.01 36.37
Scikit-Learn Benchmark: MNIST Dataset OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.1.3 Benchmark: MNIST Dataset a b 20 40 60 80 100 103.90 104.93
FFmpeg Encoder: libx264 - Scenario: Live OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live b a 6 12 18 24 30 25.19 25.43 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
AOM AV1 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p b a 16 32 48 64 80 73.28 72.59 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16 - Device: CPU a b 80 160 240 320 400 380.28 376.70 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
FFmpeg Encoder: libx264 - Scenario: Live OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live b a 40 80 120 160 200 200.46 198.58 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
CockroachDB Workload: MoVR - Concurrency: 512 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: MoVR - Concurrency: 512 b a 100 200 300 400 500 481.1 476.6
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU b a 0.9765 1.953 2.9295 3.906 4.8825 4.34 4.30 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16 - Device: CPU a b 7 14 21 28 35 31.53 31.82 MIN: 16.43 / MAX: 53.16 MIN: 20.13 / MAX: 53.75 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
ClickHouse 100M Rows Web Analytics Dataset, Second Run OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Second Run a b 50 100 150 200 250 236.78 234.64 MIN: 16.92 / MAX: 12000 MIN: 17.06 / MAX: 12000 1. ClickHouse server version 22.5.4.19 (official build).
FLAC Audio Encoding WAV To FLAC OpenBenchmarking.org Seconds, Fewer Is Better FLAC Audio Encoding 1.4 WAV To FLAC a b 4 8 12 16 20 17.80 17.96 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
Dragonflydb Clients: 50 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 5:1 a b 700K 1400K 2100K 2800K 3500K 3193072.56 3164997.08 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenVINO Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 0.2565 0.513 0.7695 1.026 1.2825 1.13 1.14 MIN: 0.62 / MAX: 11.19 MIN: 0.63 / MAX: 12.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
CockroachDB Workload: MoVR - Concurrency: 256 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: MoVR - Concurrency: 256 b a 100 200 300 400 500 480.7 476.5
WebP Image Encode Encode Settings: Quality 100, Highest Compression OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Highest Compression a b 0.774 1.548 2.322 3.096 3.87 3.44 3.41 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU b a 150 300 450 600 750 692.14 686.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
CockroachDB Workload: KV, 50% Reads - Concurrency: 128 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 50% Reads - Concurrency: 128 a b 15K 30K 45K 60K 75K 70558.3 69948.7
OpenVINO Model: Person Vehicle Bike Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU b a 4 8 12 16 20 17.32 17.47 MIN: 10.28 / MAX: 33.98 MIN: 14.95 / MAX: 28.95 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Mobile Neural Network Model: SqueezeNetV1.0 OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 a b 2 4 6 8 10 6.491 6.547 MIN: 6.43 / MAX: 6.55 MIN: 6.49 / MAX: 6.68 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
JPEG XL libjxl Input: JPEG - Quality: 90 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 90 b a 2 4 6 8 10 8.58 8.51 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream b a 2 4 6 8 10 8.7988 8.8688
EnCodec Target Bandwidth: 6 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 6 kbps a b 9 18 27 36 45 37.60 37.90
Numenta Anomaly Benchmark Detector: Relative Entropy OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Relative Entropy b a 3 6 9 12 15 13.11 13.21
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream b a 30 60 90 120 150 113.55 112.66
JPEG XL libjxl Input: PNG - Quality: 80 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 80 b a 3 6 9 12 15 9.02 8.95 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenRadioss Model: INIVOL and Fluid Structure Interaction Drop Container OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: INIVOL and Fluid Structure Interaction Drop Container a b 60 120 180 240 300 287.95 290.20
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU b a 600 1200 1800 2400 3000 2714.24 2734.68 MIN: 1614.04 / MAX: 3194.85 MIN: 1593.76 / MAX: 3225.01 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Timed Erlang/OTP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Erlang/OTP Compilation 25.0 Time To Compile b a 20 40 60 80 100 90.25 90.93
Stress-NG Test: SENDFILE OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: SENDFILE a b 90K 180K 270K 360K 450K 414239.19 411184.40 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
CockroachDB Workload: KV, 10% Reads - Concurrency: 128 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 10% Reads - Concurrency: 128 a b 11K 22K 33K 44K 55K 50479.2 50108.8
SMHasher Hash: FarmHash128 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: FarmHash128 a b 3K 6K 9K 12K 15K 15823.93 15708.02 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Redis Test: SET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 b a 400K 800K 1200K 1600K 2000K 1797429.5 1784408.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Stress-NG Test: System V Message Passing OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: System V Message Passing a b 2M 4M 6M 8M 10M 9343696.19 9276368.69 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stream Type: Add OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Add b a 8K 16K 24K 32K 40K 37116.0 36849.4 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
Y-Cruncher Pi Digits To Calculate: 500M OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.7.10.9513 Pi Digits To Calculate: 500M a b 3 6 9 12 15 11.14 11.22
AOM AV1 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p b a 4 8 12 16 20 14.23 14.13 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Stress-NG Test: Malloc OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Malloc a b 11M 22M 33M 44M 55M 50825608.57 50473178.15 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce a b 3K 6K 9K 12K 15K 15569.87 15465.65 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
CockroachDB Workload: KV, 10% Reads - Concurrency: 512 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 10% Reads - Concurrency: 512 a b 13K 26K 39K 52K 65K 61112 60710
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 12 - Input: Bosphorus 1080p b a 90 180 270 360 450 403.43 400.81 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
EnCodec Target Bandwidth: 3 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 3 kbps a b 9 18 27 36 45 38.19 38.43
Xmrig Variant: Wownero - Hash Count: 1M OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Wownero - Hash Count: 1M a b 5K 10K 15K 20K 25K 21153.3 21018.6 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Facebook RocksDB Test: Random Fill Sync OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill Sync b a 5K 10K 15K 20K 25K 25297 25136 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Stream Type: Triad OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Triad b a 8K 16K 24K 32K 40K 37021.8 36787.1 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
Dragonflydb Clients: 50 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:1 a b 700K 1400K 2100K 2800K 3500K 3341824.71 3320752.26 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI a b 8K 16K 24K 32K 40K 37533.1 37302.0 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 5K 10K 15K 20K 25K 22553.27 22417.03 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
AOM AV1 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p b a 10 20 30 40 50 43.34 43.08 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenRadioss Model: Cell Phone Drop Test OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Cell Phone Drop Test a b 12 24 36 48 60 54.86 55.19
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 4 - Input: Bosphorus 4K b a 0.7922 1.5844 2.3766 3.1688 3.961 3.521 3.500 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
7-Zip Compression Test: Compression Rating OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 22.01 Test: Compression Rating b a 40K 80K 120K 160K 200K 168680 167681 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
AOM AV1 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K b a 7 14 21 28 35 32.16 31.97 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
CockroachDB Workload: KV, 95% Reads - Concurrency: 256 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 95% Reads - Concurrency: 256 a b 20K 40K 60K 80K 100K 100761.5 100166.8
Timed Node.js Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Node.js Compilation 18.8 Time To Compile a b 50 100 150 200 250 232.35 233.72
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP32 - Device: CPU b a 600 1200 1800 2400 3000 2734.21 2750.06 MIN: 2202.17 / MAX: 3150.66 MIN: 2288.66 / MAX: 3108.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
SMHasher Hash: fasthash32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: fasthash32 a b 1400 2800 4200 5600 7000 6591.27 6553.70 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 107.48 106.87
FFmpeg Encoder: libx264 - Scenario: Upload OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Upload a b 3 6 9 12 15 12.37 12.30 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream a b 13 26 39 52 65 59.87 59.54
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 111.39 112.01
Neural Magic DeepSparse Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream a b 4 8 12 16 20 16.69 16.78
FFmpeg Encoder: libx264 - Scenario: Upload OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Upload a b 50 100 150 200 250 204.11 205.24 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
EnCodec Target Bandwidth: 24 kbps OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 24 kbps a b 10 20 30 40 50 42.08 42.31
ASTC Encoder Preset: Fast OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Fast a b 80 160 240 320 400 366.33 364.32 1. (CXX) g++ options: -O3 -flto -pthread
OpenVKL Benchmark: vklBenchmark Scalar OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 1.3.1 Benchmark: vklBenchmark Scalar a b 40 80 120 160 200 184 183 MIN: 18 / MAX: 3346 MIN: 18 / MAX: 3314
WebP Image Encode Encode Settings: Quality 100 OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100 a b 3 6 9 12 15 11.10 11.04 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU a b 4 8 12 16 20 16.75 16.84 MIN: 11.9 / MAX: 43.39 MIN: 10.79 / MAX: 39.98 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenRadioss Model: Bird Strike on Windshield OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bird Strike on Windshield a b 40 80 120 160 200 173.48 174.41
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU a b 150 300 450 600 750 715.98 712.18 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
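Comparing this INT8 result with the FP16 Vehicle Detection chart earlier in the section gives the quantization speedup on this CPU; the ratio below is our arithmetic on the two run-a figures:

fp16_fps = 380.28        # Vehicle Detection FP16, run a
int8_fps = 715.98        # Vehicle Detection FP16-INT8, run a
print(f"INT8 speedup: {int8_fps / fp16_fps:.2f}x")   # ≈ 1.88x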
OpenFOAM Input: motorBike - Mesh Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Mesh Time a b 9 18 27 36 45 40.37 40.59 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
libavif avifenc Encoder Speed: 0 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 0 b a 20 40 60 80 100 90.56 91.04 1. (CXX) g++ options: -O3 -fPIC -lm
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 13.39 13.32
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 74.70 75.08
WebP Image Encode Encode Settings: Default OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Default a b 4 8 12 16 20 17.87 17.78 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
Stress-NG Test: Memory Copying OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Memory Copying a b 1000 2000 3000 4000 5000 4863.37 4839.31 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Blender Blend File: Pabellon Barcelona - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: Pabellon Barcelona - Compute: CPU-Only a b 40 80 120 160 200 176.40 177.27
WebP2 Image Encode Encode Settings: Quality 100, Compression Effort 5 OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Quality 100, Compression Effort 5 b a 2 4 6 8 10 8.43 8.39 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
libavif avifenc Encoder Speed: 6, Lossless OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6, Lossless b a 2 4 6 8 10 7.571 7.607 1. (CXX) g++ options: -O3 -fPIC -lm
CockroachDB Workload: KV, 50% Reads - Concurrency: 512 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 50% Reads - Concurrency: 512 a b 16K 32K 48K 64K 80K 75643.3 75289.1
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 152.23 152.94
Timed Linux Kernel Compilation Build: allmodconfig OpenBenchmarking.org Seconds, Fewer Is Better Timed Linux Kernel Compilation 6.1 Build: allmodconfig a b 100 200 300 400 500 476.67 478.89
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP32 - Device: CPU a b 0.9743 1.9486 2.9229 3.8972 4.8715 4.33 4.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Stress-NG Test: MEMFD OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MEMFD a b 200 400 600 800 1000 891.86 887.76 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Numenta Anomaly Benchmark Detector: Contextual Anomaly Detector OSE OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Contextual Anomaly Detector OSE a b 9 18 27 36 45 38.98 39.16
Dragonflydb Clients: 200 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:5 a b 800K 1600K 2400K 3200K 4000K 3609156.11 3593257.69 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 40 80 120 160 200 162.9 162.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
BRL-CAD VGR Performance Metric OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.32.6 VGR Performance Metric a b 90K 180K 270K 360K 450K 405453 403757 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
FFmpeg Encoder: libx265 - Scenario: Video On Demand OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand b a 60 120 180 240 300 263.53 264.63 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
rav1e Speed: 6 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 6 a b 0.8226 1.6452 2.4678 3.2904 4.113 3.656 3.641
PostgreSQL Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency b a 0.3852 0.7704 1.1556 1.5408 1.926 1.705 1.712 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Stream Type: Scale OpenBenchmarking.org MB/s, More Is Better Stream 2013-01-17 Type: Scale b a 7K 14K 21K 28K 35K 33173.5 33038.0 1. (CC) gcc options: -mcmodel=medium -O3 -march=native -fopenmp
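With the Scale result, all four STREAM kernels from this run now appear in the section. The sketch below gathers them (b and a values as printed) and reports each kernel's run-to-run spread, which stays below 1.3%:

# STREAM results from this section, MB/s (run b, run a).
stream = {
    "Copy":  (55314.0, 54649.6),
    "Scale": (33173.5, 33038.0),
    "Add":   (37116.0, 36849.4),
    "Triad": (37021.8, 36787.1),
}
for kernel, (b, a) in stream.items():
    print(f"{kernel:5s} spread {(b - a) / a:.2%}")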
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 80 160 240 320 400 376.6 375.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
CockroachDB Workload: KV, 60% Reads - Concurrency: 128 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 60% Reads - Concurrency: 128 b a 17K 34K 51K 68K 85K 77363.2 77055.3
Y-Cruncher Pi Digits To Calculate: 1B OpenBenchmarking.org Seconds, Fewer Is Better Y-Cruncher 0.7.10.9513 Pi Digits To Calculate: 1B a b 5 10 15 20 25 22.83 22.92
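Doubling the digit count from 500M to 1B roughly doubles the runtime here; the ratio lands slightly above 2, which is expected since the cost of computing Pi to high precision grows a bit faster than linearly in the digit count. The arithmetic, taken from the two run-a times:

t_500m, t_1b = 11.14, 22.83   # run a, seconds
print(f"1B / 500M runtime ratio: {t_1b / t_500m:.2f}")   # ≈ 2.05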
nekRS Input: TurboPipe Periodic OpenBenchmarking.org FLOP/s, More Is Better nekRS 22.0 Input: TurboPipe Periodic a b 14000M 28000M 42000M 56000M 70000M 67648400000 67381400000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi
FFmpeg Encoder: libx265 - Scenario: Video On Demand OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand b a 7 14 21 28 35 28.74 28.63 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
AOM AV1 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K a b 2 4 6 8 10 7.91 7.88 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
FFmpeg Encoder: libx264 - Scenario: Platform OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Platform a b 40 80 120 160 200 160.71 161.32 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Blender Blend File: Barbershop - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: Barbershop - Compute: CPU-Only a b 130 260 390 520 650 583.81 586.00
CockroachDB Workload: KV, 50% Reads - Concurrency: 1024 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 50% Reads - Concurrency: 1024 b a 15K 30K 45K 60K 75K 72222.0 71952.1
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b 130 260 390 520 650 608.48 610.75
PostgreSQL Scaling Factor: 100 - Clients: 50 - Mode: Read Write OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Write b a 6K 12K 18K 24K 30K 29318 29212 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
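For readers who want to approximate the read-write numbers outside the Phoronix Test Suite, the standard pgbench client can be driven directly. The sketch below is a rough manual equivalent, not the harness's exact invocation; the database name and the 60-second duration are our assumptions:

import subprocess

DB = "pgbench_test"   # assumed database name; create it first with createdb
# Initialize a scale-factor-100 dataset, then run the default read/write
# (TPC-B-like) script with 50 client connections for 60 seconds.
subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)
subprocess.run(["pgbench", "-c", "50", "-j", "50", "-T", "60", DB], check=True)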
FFmpeg Encoder: libx264 - Scenario: Platform OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Platform a b 11 22 33 44 55 47.13 46.96 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
CockroachDB Workload: KV, 95% Reads - Concurrency: 1024 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 95% Reads - Concurrency: 1024 a b 20K 40K 60K 80K 100K 94157.8 93822.1
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b 13 26 39 52 65 56.69 56.49 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Neural Magic DeepSparse Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 78.72 78.44
GraphicsMagick Operation: Enhanced OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced b a 120 240 360 480 600 570 568 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenVKL Benchmark: vklBenchmark ISPC OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 1.3.1 Benchmark: vklBenchmark ISPC a b 60 120 180 240 300 288 287 MIN: 37 / MAX: 3639 MIN: 37 / MAX: 3621
oneDNN Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b 1.3319 2.6638 3.9957 5.3276 6.6595 5.89944 5.91961 MIN: 5.84 MIN: 5.83 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
SMHasher Hash: Spooky32 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: Spooky32 a b 3K 6K 9K 12K 15K 14254.70 14206.73 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 30 60 90 120 150 150.6 150.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM b a 90 180 270 360 450 403.8 402.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
Dragonflydb Clients: 200 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 5:1 a b 700K 1400K 2100K 2800K 3500K 3273370.07 3263145.89 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
srsRAN Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM b a 40 80 120 160 200 160.7 160.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
CockroachDB Workload: KV, 60% Reads - Concurrency: 1024 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 60% Reads - Concurrency: 1024 a b 16K 32K 48K 64K 80K 76564.0 76335.4
Neural Magic DeepSparse Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b 5 10 15 20 25 19.64 19.58
Stress-NG Test: MMAP OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MMAP a b 80 160 240 320 400 383.32 382.19 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
ASTC Encoder Preset: Exhaustive OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Exhaustive a b 0.3541 0.7082 1.0623 1.4164 1.7705 1.5739 1.5693 1. (CXX) g++ options: -O3 -flto -pthread
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM b a 20 40 60 80 100 107.1 106.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
FFmpeg Encoder: libx264 - Scenario: Video On Demand OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 11 22 33 44 55 47.17 47.04 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg Encoder: libx264 - Scenario: Video On Demand OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 40 80 120 160 200 160.60 161.04 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
rav1e Speed: 10 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 10 a b 2 4 6 8 10 8.018 7.997
rav1e Speed: 1 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 1 a b 0.1755 0.351 0.5265 0.702 0.8775 0.780 0.778
rav1e Speed: 5 OpenBenchmarking.org Frames Per Second, More Is Better rav1e 0.6.1 Speed: 5 b a 0.6183 1.2366 1.8549 2.4732 3.0915 2.748 2.741
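This completes the rav1e speed levels measured in this run; collected together (run-a FPS), the preset ladder spans roughly a 10x range:

rav1e_fps = {1: 0.780, 5: 2.741, 6: 3.656, 10: 8.018}   # run a
base = rav1e_fps[1]
for speed, fps in rav1e_fps.items():
    print(f"speed {speed:>2}: {fps / base:.1f}x vs speed 1")   # up to ~10.3x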
Numenta Anomaly Benchmark Detector: Earthgecko Skyline OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: Earthgecko Skyline a b 20 40 60 80 100 77.22 77.41
Stress-NG Test: Vector Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Vector Math a b 40K 80K 120K 160K 200K 178859.70 178430.01 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Apache Spark Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe a b 0.9473 1.8946 2.8419 3.7892 4.7365 4.20 4.21
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU b a 2 4 6 8 10 8.48 8.46 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
nginx Connections: 200 OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 a b 30K 60K 90K 120K 150K 148446.23 148101.85 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
AOM AV1 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K b a 3 6 9 12 15 13.57 13.54 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
CockroachDB Workload: KV, 60% Reads - Concurrency: 256 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 60% Reads - Concurrency: 256 b a 20K 40K 60K 80K 100K 82467.0 82286.1
Stress-NG Test: Crypto OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Crypto a b 10K 20K 30K 40K 50K 44751.55 44653.67 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Blender Blend File: Fishy Cat - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: Fishy Cat - Compute: CPU-Only a b 15 30 45 60 75 69.44 69.59
CockroachDB Workload: KV, 95% Reads - Concurrency: 512 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 95% Reads - Concurrency: 512 a b 20K 40K 60K 80K 100K 98031.9 97822.4
CockroachDB Workload: KV, 10% Reads - Concurrency: 1024 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 10% Reads - Concurrency: 1024 a b 13K 26K 39K 52K 65K 59301.4 59175.4
FFmpeg Encoder: libx265 - Scenario: Platform OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform b a 7 14 21 28 35 28.68 28.62 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Timed Wasmer Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile a b 11 22 33 44 55 46.86 46.96 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
ASTC Encoder Preset: Thorough OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Thorough a b 4 8 12 16 20 14.41 14.38 1. (CXX) g++ options: -O3 -flto -pthread
Timed PHP Compilation Time To Compile OpenBenchmarking.org Seconds, Fewer Is Better Timed PHP Compilation 8.1.9 Time To Compile a b 10 20 30 40 50 43.61 43.70
ASTC Encoder Preset: Medium OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Medium a b 30 60 90 120 150 120.33 120.08 1. (CXX) g++ options: -O3 -flto -pthread
Stress-NG Test: Mutex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Mutex b a 3M 6M 9M 12M 15M 12810594.01 12785778.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
FFmpeg Encoder: libx265 - Scenario: Platform OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform b a 60 120 180 240 300 264.16 264.66 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a b 5 10 15 20 25 19.56 19.52
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 80 160 240 320 400 376.6 375.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
Blender Blend File: Classroom - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: Classroom - Compute: CPU-Only a b 30 60 90 120 150 148.83 149.10
libavif avifenc Encoder Speed: 2 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 2 a b 11 22 33 44 55 47.56 47.64 1. (CXX) g++ options: -O3 -fPIC -lm
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 74.92 75.03
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 13.35 13.33
SMHasher Hash: SHA3-256 OpenBenchmarking.org MiB/sec, More Is Better SMHasher 2022-08-22 Hash: SHA3-256 a b 30 60 90 120 150 146.70 146.47 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Neural Magic DeepSparse Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream b a 130 260 390 520 650 610.93 611.89
NCNN Target: CPU - Model: regnety_400m OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m a b 6 12 18 24 30 26.61 26.65 MIN: 25.84 / MAX: 27.71 MIN: 26.08 / MAX: 28.34 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU b a 300 600 900 1200 1500 1408.60 1410.67 MIN: 1373.05 / MAX: 1468.03 MIN: 1372.29 / MAX: 1527.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Timed CPython Compilation Build Configuration: Released Build, PGO + LTO Optimized OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Released Build, PGO + LTO Optimized a b 60 120 180 240 300 257.57 257.95
GraphicsMagick Operation: Resizing OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing b a 500 1000 1500 2000 2500 2132 2129 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
spaCy Model: en_core_web_trf OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_trf a b 300 600 900 1200 1500 1465 1463
TensorFlow Device: CPU - Batch Size: 16 - Model: GoogLeNet OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: GoogLeNet a b 10 20 30 40 50 44.45 44.39
Timed Linux Kernel Compilation Build: defconfig OpenBenchmarking.org Seconds, Fewer Is Better Timed Linux Kernel Compilation 6.1 Build: defconfig a b 10 20 30 40 50 43.94 44.00
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a 200 400 600 800 1000 844.82 843.69 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b 50 100 150 200 250 222.16 221.88
Timed CPython Compilation Build Configuration: Default OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Default b a 4 8 12 16 20 15.41 15.43
Neural Magic DeepSparse Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream a b 12 24 36 48 60 53.96 54.02
NCNN Target: CPU - Model: FastestDet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: FastestDet b a 2 4 6 8 10 8.90 8.91 MIN: 8.58 / MAX: 9.7 MIN: 8.78 / MAX: 10.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU b a 3 6 9 12 15 9.15733 9.16751 MIN: 8.99 MIN: 9.09 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
CockroachDB Workload: KV, 10% Reads - Concurrency: 256 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 10% Reads - Concurrency: 256 b a 13K 26K 39K 52K 65K 61278.6 61211.1
FFmpeg Encoder: libx265 - Scenario: Live OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Live a b 16 32 48 64 80 72.95 73.03 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
SVT-AV1 Encoder Mode: Preset 13 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b 80 160 240 320 400 359.56 359.18 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenVINO Model: Weld Porosity Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a 7 14 21 28 35 28.40 28.43 MIN: 14.44 / MAX: 59.77 MIN: 14.41 / MAX: 59.57 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Dragonflydb Clients: 50 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:5 a b 800K 1600K 2400K 3200K 4000K 3519377.56 3515736.22 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Scikit-Learn Benchmark: Sparse Random Projections, 100 Iterations OpenBenchmarking.org Seconds, Fewer Is Better Scikit-Learn 1.1.3 Benchmark: Sparse Random Projections, 100 Iterations a b 40 80 120 160 200 162.42 162.59
FFmpeg Encoder: libx265 - Scenario: Live OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Live a b 15 30 45 60 75 69.22 69.15 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
libavif avifenc Encoder Speed: 6 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.11 Encoder Speed: 6 b a 0.9027 1.8054 2.7081 3.6108 4.5135 4.008 4.012 1. (CXX) g++ options: -O3 -fPIC -lm
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 3 6 9 12 15 11.94 11.96
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream a b 20 40 60 80 100 83.67 83.59
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 154.89 154.76
Neural Magic DeepSparse Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.1 Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream a b 20 40 60 80 100 77.45 77.51
Blender Blend File: BMW27 - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: BMW27 - Compute: CPU-Only a b 12 24 36 48 60 53.91 53.95
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic b a 90K 180K 270K 360K 450K 421495.46 421253.99 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Xmrig Variant: Monero - Hash Count: 1M OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Monero - Hash Count: 1M a b 3K 6K 9K 12K 15K 15342.8 15334.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
CockroachDB Workload: KV, 95% Reads - Concurrency: 128 OpenBenchmarking.org ops/s, More Is Better CockroachDB 22.2 Workload: KV, 95% Reads - Concurrency: 128 b a 20K 40K 60K 80K 100K 103111.7 103073.5
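This is the last of the KV 95%-reads concurrency points scattered through the section; lining them up (run a) shows throughput easing off by under 9% as concurrency grows from 128 to 1024 clients:

kv95 = {128: 103073.5, 256: 100761.5, 512: 98031.9, 1024: 94157.8}  # ops/s, run a
peak = kv95[128]
for conc, ops in kv95.items():
    print(f"{conc:>4} clients: {ops / peak:.1%} of the 128-client rate")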
Stress-NG Test: Glibc Qsort Data Sorting OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc Qsort Data Sorting a b 80 160 240 320 400 385.80 385.67 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Semaphores OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Semaphores a b 1000K 2000K 3000K 4000K 5000K 4869829.14 4868774.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenFOAM Input: motorBike - Execution Time OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Execution Time a b 16 32 48 64 80 71.60 71.61 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
Dragonflydb Clients: 200 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:1 b a 700K 1400K 2100K 2800K 3500K 3388656.73 3388128.93 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Natron Input: Spaceship OpenBenchmarking.org FPS, More Is Better Natron 2.4.3 Input: Spaceship b a 0.945 1.89 2.835 3.78 4.725 4.2 4.2
OpenVINO Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 0.2385 0.477 0.7155 0.954 1.1925 1.06 1.06 MIN: 0.6 / MAX: 11.35 MIN: 0.6 / MAX: 12.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
AOM AV1 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K b a 0.0765 0.153 0.2295 0.306 0.3825 0.34 0.34 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
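With the Speed 0 two-pass point, all of the 4K AOM AV1 presets measured in this run appear in the section. The sketch gathers the better of the two runs for each preset and scales them against Speed 0; the roughly 120x spread is our arithmetic on the printed numbers:

aom_4k_fps = {               # better run per chart, Bosphorus 4K
    "Speed 0 Two-Pass": 0.34,
    "Speed 4 Two-Pass": 7.91,
    "Speed 6 Two-Pass": 13.57,
    "Speed 8 Realtime": 32.16,
    "Speed 9 Realtime": 40.93,
    "Speed 10 Realtime": 40.76,
}
base = aom_4k_fps["Speed 0 Two-Pass"]
for preset, fps in aom_4k_fps.items():
    print(f"{preset:18s} {fps / base:6.1f}x")   # up to ~120x vs Speed 0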
GraphicsMagick Operation: Sharpen OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen b a 80 160 240 320 400 376 376 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
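GraphicsMagick reports iterations per minute, which is easy to misread as a latency; converting to time per operation (60 / rate) makes the four filters in this section directly comparable. Run-a figures:

gm_iters_per_min = {"Resizing": 2129, "Swirl": 1178, "Enhanced": 568, "Sharpen": 376}  # run a
for op, rate in gm_iters_per_min.items():
    print(f"{op:9s} {60.0 / rate * 1000:6.1f} ms per iteration")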
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM b a 13 26 39 52 65 60.2 60.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
WebP2 Image Encode Encode Settings: Quality 95, Compression Effort 7 OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Quality 95, Compression Effort 7 b a 0.0338 0.0676 0.1014 0.1352 0.169 0.15 0.15 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode Encode Settings: Quality 75, Compression Effort 7 OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Quality 75, Compression Effort 7 b a 0.0675 0.135 0.2025 0.27 0.3375 0.30 0.30 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP Image Encode Encode Settings: Quality 100, Lossless, Highest Compression OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Lossless, Highest Compression b a 0.1373 0.2746 0.4119 0.5492 0.6865 0.61 0.61 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
JPEG XL libjxl Input: JPEG - Quality: 100 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 100 b a 0.1485 0.297 0.4455 0.594 0.7425 0.66 0.66 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
JPEG XL libjxl Input: PNG - Quality: 100 OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 100 b a 0.153 0.306 0.459 0.612 0.765 0.68 0.68 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
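Across both input types, stepping libjxl up to quality 100 costs roughly 13x the throughput of quality 80; the ratios below are computed from the run-a figures in this section:

jxl_mps = {                       # run a, MP/s
    "PNG":  {80: 8.95, 90: 8.89, 100: 0.68},
    "JPEG": {80: 8.59, 90: 8.51, 100: 0.66},
}
for src, by_quality in jxl_mps.items():
    print(f"{src}: q80 is {by_quality[80] / by_quality[100]:.1f}x faster than q100")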
SMHasher Hash: MeowHash x86_64 AES-NI OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: MeowHash x86_64 AES-NI a b 13 26 39 52 65 57.72 57.93 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha0_aes_avx2 x86_64 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha0_aes_avx2 x86_64 a b 8 16 24 32 40 34.72 35.16 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: FarmHash32 x86_64 AVX OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash32 x86_64 AVX a b 10 20 30 40 50 43.28 43.29 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: t1ha2_atonce OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: t1ha2_atonce a b 8 16 24 32 40 34.76 35.02 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: FarmHash128 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: FarmHash128 a b 14 28 42 56 70 63.80 63.86 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: fasthash32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: fasthash32 a b 9 18 27 36 45 37.18 37.26 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: Spooky32 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: Spooky32 b a 12 24 36 48 60 51.63 51.66 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: SHA3-256 OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: SHA3-256 b a 600 1200 1800 2400 3000 2649.30 2649.38 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher Hash: wyhash OpenBenchmarking.org cycles/hash, Fewer Is Better SMHasher 2022-08-22 Hash: wyhash a b 6 12 18 24 30 26.33 26.39 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
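The SMHasher charts split each hash into a bulk-throughput figure (MiB/sec) and a small-key figure (cycles/hash), and neither ordering tells the whole story on its own. The sketch below pairs the run-a values from this section and sorts by bulk speed, which makes it easy to see that t1ha0_aes_avx2 leads on large buffers while wyhash has the lowest small-key cost here:

# (bulk MiB/s, cycles per small-key hash), run a
smhasher = {
    "t1ha0_aes_avx2": (66642.16, 34.72),
    "MeowHash AES-NI": (37533.1, 57.72),
    "FarmHash32 AVX": (27249.95, 43.28),
    "wyhash": (23068.13, 26.33),
    "FarmHash128": (15823.93, 63.80),
    "t1ha2_atonce": (15569.87, 34.76),
    "Spooky32": (14254.70, 51.66),
    "fasthash32": (6591.27, 37.18),
    "SHA3-256": (146.70, 2649.38),
}
for name, (mib_s, cyc) in sorted(smhasher.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:16s} {mib_s:9.1f} MiB/s  {cyc:7.2f} cycles/hash")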
Phoronix Test Suite v10.8.5