3900xt-november AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2211180-SYST-3900XTN38&grr .
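For local reproduction, the result identifier in the URL above can be passed to the Phoronix Test Suite, which will offer to run the same test selection and compare against these posted numbers. A minimal sketch, assuming a working phoronix-test-suite installation (exact prompts and behavior vary by PTS version):

  phoronix-test-suite benchmark 2211180-SYST-3900XTN38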
3900xt-november - System Details (identical for runs a, aa, and b):
Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.15.0-47-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160
Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021
Graphics Details - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102
Python Details - Python 3.10.4
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
3900xt-november tensorflow: CPU - 512 - GoogLeNet smhasher: SHA3-256 smhasher: SHA3-256 nekrs: TurboPipe Periodic openradioss: INIVOL and Fluid Structure Interaction Drop Container tensorflow: CPU - 256 - GoogLeNet tensorflow: CPU - 512 - AlexNet tensorflow: CPU - 64 - ResNet-50 jpegxl: JPEG - 100 jpegxl: PNG - 100 tensorflow: CPU - 32 - ResNet-50 minibude: OpenMP - BM2 minibude: OpenMP - BM2 openradioss: Bird Strike on Windshield ffmpeg: libx264 - Upload ffmpeg: libx264 - Upload ffmpeg: libx265 - Platform ffmpeg: libx265 - Platform ffmpeg: libx265 - Video On Demand ffmpeg: libx265 - Video On Demand tensorflow: CPU - 256 - AlexNet openfoam: drivaerFastback, Small Mesh Size - Execution Time openfoam: drivaerFastback, Small Mesh Size - Mesh Time tensorflow: CPU - 64 - GoogLeNet ffmpeg: libx265 - Upload ffmpeg: libx265 - Upload ffmpeg: libx264 - Platform ffmpeg: libx264 - Platform ffmpeg: libx264 - Video On Demand ffmpeg: libx264 - Video On Demand tensorflow: CPU - 16 - ResNet-50 openradioss: Rubber O-Ring Seal Installation jpegxl: JPEG - 80 jpegxl: PNG - 80 avifenc: 0 openradioss: Bumper Beam xmrig: Monero - 1M tensorflow: CPU - 32 - GoogLeNet openradioss: Cell Phone Drop Test jpegxl: JPEG - 90 jpegxl: PNG - 90 aom-av1: Speed 4 Two-Pass - Bosphorus 4K cpuminer-opt: Garlicoin xmrig: Wownero - 1M ffmpeg: libx265 - Live ffmpeg: libx265 - Live onednn: Recurrent Neural Network Training - f32 - CPU aom-av1: Speed 0 Two-Pass - Bosphorus 4K onednn: Recurrent Neural Network Inference - f32 - CPU tensorflow: CPU - 64 - AlexNet libplacebo: av1_grain_lap libplacebo: hdr_lut libplacebo: hdr_peakdetect libplacebo: polar_nocompute libplacebo: deband_heavy cpuminer-opt: Blake-2 S onednn: Deconvolution Batch shapes_1d - f32 - CPU avifenc: 2 aom-av1: Speed 6 Two-Pass - Bosphorus 4K spacy: en_core_web_trf spacy: en_core_web_lg tensorflow: CPU - 16 - GoogLeNet jpegxl-decode: 1 nginx: 1000 nginx: 500 nginx: 200 nginx: 100 minibude: OpenMP - BM1 minibude: OpenMP - BM1 aom-av1: Speed 4 Two-Pass - Bosphorus 1080p tensorflow: CPU - 32 - AlexNet y-cruncher: 1B ffmpeg: libx264 - Live ffmpeg: libx264 - Live smhasher: FarmHash128 smhasher: FarmHash128 onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU smhasher: MeowHash x86_64 AES-NI smhasher: MeowHash x86_64 AES-NI jpegxl-decode: All smhasher: Spooky32 smhasher: Spooky32 deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream tensorflow: CPU - 16 - AlexNet deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream encodec: 24 kbps deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream cpuminer-opt: Skeincoin cpuminer-opt: Triple SHA-256, Onecoin cpuminer-opt: Myriad-Groestl cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: x25x cpuminer-opt: Magi cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Ringcoin cpuminer-opt: Deepcoin cpuminer-opt: scrypt deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question 
Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream encodec: 6 kbps encodec: 3 kbps aom-av1: Speed 0 Two-Pass - Bosphorus 1080p encodec: 1.5 kbps deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream aom-av1: Speed 6 Realtime - Bosphorus 4K smhasher: t1ha2_atonce smhasher: t1ha2_atonce smhasher: t1ha0_aes_avx2 x86_64 smhasher: t1ha0_aes_avx2 x86_64 deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream encode-flac: WAV To FLAC deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream y-cruncher: 500M quadray: 5 - 4K quadray: 2 - 4K quadray: 3 - 4K quadray: 1 - 4K quadray: 5 - 1080p quadray: 2 - 1080p quadray: 3 - 1080p quadray: 1 - 1080p aom-av1: Speed 6 Two-Pass - Bosphorus 1080p aom-av1: Speed 8 Realtime - Bosphorus 4K stress-ng: Context Switching stress-ng: CPU Cache stress-ng: MEMFD stress-ng: Futex stress-ng: NUMA stress-ng: System V Message Passing stress-ng: Mutex stress-ng: Memory Copying stress-ng: Socket Activity stress-ng: Matrix Math stress-ng: Malloc stress-ng: Semaphores stress-ng: Forking stress-ng: Crypto stress-ng: MMAP stress-ng: IO_uring stress-ng: Atomic stress-ng: Glibc Qsort Data Sorting stress-ng: CPU Stress stress-ng: Glibc C String Functions stress-ng: Vector Math stress-ng: SENDFILE onednn: IP Shapes 1D - f32 - CPU aom-av1: Speed 9 Realtime - Bosphorus 4K aom-av1: Speed 10 Realtime - Bosphorus 4K aom-av1: Speed 6 Realtime - Bosphorus 1080p smhasher: fasthash32 smhasher: fasthash32 avifenc: 6, Lossless onednn: IP Shapes 3D - f32 - CPU smhasher: FarmHash32 x86_64 AVX smhasher: FarmHash32 x86_64 AVX aom-av1: Speed 8 Realtime - Bosphorus 1080p smhasher: wyhash smhasher: wyhash onednn: Convolution Batch Shapes Auto - f32 - CPU avifenc: 6 aom-av1: Speed 9 Realtime - Bosphorus 1080p aom-av1: Speed 10 Realtime - Bosphorus 1080p avifenc: 10, Lossless onednn: Deconvolution Batch shapes_3d - f32 - CPU a aa b 28.79 2598.05 149.08 31559900000 633.97 28.64 110.36 10.09 0.68 0.69 10.29 16.394 409.855 285.69 12.64 199.730241377 31.31 241.928872179 31.45 240.839641179 109.23 363.21657 41.823568 30.55 15.32 164.797431657 48.84 155.11 48.83 155.137732285 10.39 131 8.68 9.08 130.357 127.42 7982.6 30.46 104.51 8.62 9.02 7.41 1939.65 10466.6 75.38 66.99 4075.57 0.25 2503.12 93.8 2073.69 2797.23 
2610.2 977.72 476.59 524680 8.2274 63.121 12.67 832 12196 31.89 48.69 65135.63 75259.81 83513.14 87222.68 16.233 405.816 15.08 76.22 41.524 204.75 24.66 64.103 15884.24 1.8817 57.74 37183.28 146.63 49.056 14909.02 180.6052 33.2144 55.63 611.8404 9.7748 54.416 146.4793 40.9537 617.6457 9.7133 106880 204970 21130 102170 631.15 603.58 31260 2705.27 11710 226.42 48.4869 20.6208 47.326 47.231 0.73 45.14 124.7919 8.0129 125.3455 7.9776 33.3853 29.9481 23.1 32.254 16686.57 34.811 66796.46 73.7085 81.374 113.4603 52.8476 17.255 53.4102 112.2815 16.8159 59.4485 23.5574 42.4309 12.7499 78.3889 19.547 0.66 2.89 2.46 10.21 2.64 11.12 9.71 39.48 41.57 34.46 4076744.52 158.89 747.31 2600010.05 260.68 7925418.15 6319425.86 3599.42 8595.2 61078.06 13779284.62 2463565.68 40818.92 22523.79 294.05 5126.99 575564.96 180.77 32900.39 2048965.25 90999.65 215308.14 4.69026 47.42 47.55 45.2 36.902 6658.92 10.149 12.0602 40.069 29093.03 85.22 25.309 23915.75 22.5449 6.049 108.47 109.52 5.48 5.27435 28.71 2620.221 148.14 32575200000 650.25 28.67 110.49 10.06 0.68 0.69 10.29 16.412 410.307 286.84 12.64 199.81 31.49 240.520175955 31.62 239.59 109.38 361.47957 41.852307 30.51 15.33 164.72 49.01 154.57 48.85 155.058138517 10.44 130.82 8.98 9.13 131.57 126.51 7945.8 30.58 104.11 8.76 9.03 7.45 1960.71 10640 75.50 66.89 4145.83 0.25 2484.9 93.88 2106.73 2806.16 2598.97 948.28 479.65 528060 5.47125 64.019 12.68 833 12217 31.83 47.21 65507.92 76617.7 84210.42 86726.1 16.268 406.706 15.01 76.25 41.841 205.03 24.63 60.336 16614.28 1.27895 56.976 37904.32 148.61 48.98 15026.6 181.4556 33.0595 55.47 617.7405 9.7085 54.426 146.1108 41.0572 617.0345 9.723 108620 204950 21260 102180 631.74 603.33 31820 2709.03 11720 226.59 49.228 20.3104 48.564 47.659 0.73 46.082 124.7582 8.0151 125.1653 7.989 33.3228 30.0041 23.2 34.548 15706.92 32.875 69896.41 74.0117 81.044 113.6428 52.7812 17.326 53.4052 112.2567 16.7678 59.619 23.5672 42.4149 12.7463 78.4088 19.527 0.66 2.9 2.43 9.63 2.62 11.22 9.81 38.5 42.17 34.91 4103283.77 152.46 773.07 2717409.38 261.26 7932169.44 6371966.37 3592.78 8500.17 61203.21 13924963.43 2465656.19 40572.79 22525.14 292.1 8765.88 575708.53 192.23 31954.78 2035501.44 90998.92 215163.99 4.74859 47.62 47.95 46.98 35.257 6917.32 10.094 12.0288 41.134 28582.22 85.9 25.29 24070.66 22.5504 6.076 108.91 111.49 5.505 5.25072 28.33 2564.206 151.76 32544333333 631.33 28.65 110.02 10.12 0.68 0.69 10.33 16.405 410.141 287.33 12.67 199.35 31.55 240.09 31.56 240.03 109.30 359.99903 42.077007 30.50 15.37 164.26 48.62 155.81 48.77 155.31 10.44 130.06 8.84 9.12 130.149 127.37 8175.8 30.48 104.27 8.80 9.07 7.41 2112.41 10577.3 75.10 67.26 4147.65 0.25 2481.61 93.79 2093.70 2840.16 2603.19 961.48 474.53 519433 6.22396 64.373 12.71 550 11978 31.85 48.24 66302.86 76562.05 82789.42 83591.65 16.170 404.247 15.12 76.37 41.426 204.73 24.67 61.812 16361.64 1.74121 55.875 38610.17 148.59 49.500 14856.10 182.2569 32.914 55.49 618.1363 9.6761 54.862 146.6414 40.9084 609.8345 9.8238 105667 205100 21397 102017 628.47 600.87 31143 2730.30 11660 225.89 49.4908 20.2026 48.215 48.45 0.73 46.624 125.0469 7.9966 125.2691 7.9824 33.3404 29.9886 23.13 33.679 16085.48 33.612 68310.66 73.7666 81.3092 114.4037 52.4162 17.209 53.4398 112.1223 16.8258 59.4138 23.596 42.3627 12.7634 78.3062 19.531 0.66 2.86 2.49 10.14 2.63 11.17 9.70 39.26 42.07 34.60 3796353.99 149.79 770.89 2563831.67 260.02 7654922.05 6344621.56 3456.53 8961.2 58814.38 13767394.33 2466594.73 40197.6 22434.69 293.29 4168.54 571674.68 184.77 30793.53 1531955.88 87587.66 205666.12 4.73086 
47.60 48.23 45.64 36.688 6686.25 10.152 12.0313 42.575 27788.51 84.93 25.914 23571.26 22.5210 6.019 107.41 110.26 5.497 5.28794
TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better): aa: 28.71, a: 28.79, b: 28.33
SMHasher 2022-08-22 - Hash: SHA3-256 (cycles/hash, Fewer Is Better): a: 2598.05, aa: 2620.22, b: 2564.21. SE +/- 19.10, N = 7. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: SHA3-256 (MiB/sec, More Is Better): a: 149.08, aa: 148.14, b: 151.76. SE +/- 1.41, N = 7. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
nekRS 22.0 - Input: TurboPipe Periodic (FLOP/s, More Is Better): a: 31559900000, aa: 32575200000, b: 32544333333. SE +/- 20784155.29, N = 3. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi
OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better): a: 633.97, aa: 650.25, b: 631.33. SE +/- 0.82, N = 3
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better): a: 28.64, aa: 28.67, b: 28.65
TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, More Is Better): a: 110.36, aa: 110.49, b: 110.02. SE +/- 0.06, N = 3
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better): a: 10.09, aa: 10.06, b: 10.12
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better): a: 0.68, aa: 0.68, b: 0.68. SE +/- 0.00, N = 3. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better): a: 0.69, aa: 0.69, b: 0.69. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better): a: 10.29, aa: 10.29, b: 10.33. SE +/- 0.01, N = 3
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, More Is Better): a: 16.39, aa: 16.41, b: 16.41. SE +/- 0.00, N = 3. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, More Is Better): a: 409.86, aa: 410.31, b: 410.14. SE +/- 0.03, N = 3. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better): a: 285.69, aa: 286.84, b: 287.33. SE +/- 0.10, N = 3
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (FPS, More Is Better): a: 12.64, aa: 12.64, b: 12.67. SE +/- 0.06, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Upload (Seconds, Fewer Is Better): a: 199.73, aa: 199.81, b: 199.35. SE +/- 0.94, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better): a: 31.31, aa: 31.49, b: 31.55. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Platform (Seconds, Fewer Is Better): a: 241.93, aa: 240.52, b: 240.09. SE +/- 0.20, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better): a: 31.45, aa: 31.62, b: 31.56. SE +/- 0.07, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Video On Demand (Seconds, Fewer Is Better): a: 240.84, aa: 239.59, b: 240.03. SE +/- 0.53, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, More Is Better): a: 109.23, aa: 109.38, b: 109.30. SE +/- 0.06, N = 3
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better): a: 363.22, aa: 361.48, b: 360.00. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better): a: 41.82, aa: 41.85, b: 42.08. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better): a: 30.55, aa: 30.51, b: 30.50. SE +/- 0.01, N = 3
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better): a: 15.32, aa: 15.33, b: 15.37. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Upload (Seconds, Fewer Is Better): a: 164.80, aa: 164.72, b: 164.26. SE +/- 0.31, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (FPS, More Is Better): a: 48.84, aa: 49.01, b: 48.62. SE +/- 0.08, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Platform (Seconds, Fewer Is Better): a: 155.11, aa: 154.57, b: 155.81. SE +/- 0.25, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (FPS, More Is Better): a: 48.83, aa: 48.85, b: 48.77. SE +/- 0.09, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Video On Demand (Seconds, Fewer Is Better): a: 155.14, aa: 155.06, b: 155.31. SE +/- 0.30, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): a: 10.39, aa: 10.44, b: 10.44. SE +/- 0.02, N = 3
OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better): a: 131.00, aa: 130.82, b: 130.06. SE +/- 0.35, N = 3
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better): a: 8.68, aa: 8.98, b: 8.84. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better): a: 9.08, aa: 9.13, b: 9.12. SE +/- 0.02, N = 3. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, Fewer Is Better): a: 130.36, aa: 131.57, b: 130.15. SE +/- 0.52, N = 3. 1. (CXX) g++ options: -O3 -fPIC -lm
OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, Fewer Is Better): a: 127.42, aa: 126.51, b: 127.37. SE +/- 0.16, N = 3
Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better): a: 7982.6, aa: 7945.8, b: 8175.8. SE +/- 88.16, N = 3. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better): a: 30.46, aa: 30.58, b: 30.48. SE +/- 0.01, N = 3
OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, Fewer Is Better): a: 104.51, aa: 104.11, b: 104.27. SE +/- 0.05, N = 3
JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better): a: 8.62, aa: 8.76, b: 8.80. SE +/- 0.08, N = 3. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better): a: 9.02, aa: 9.03, b: 9.07. SE +/- 0.02, N = 3. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 7.41, aa: 7.45, b: 7.41. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, More Is Better): a: 1939.65, aa: 1960.71, b: 2112.41. SE +/- 24.17, N = 15. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better): a: 10466.6, aa: 10640.0, b: 10577.3. SE +/- 4.48, N = 3. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (FPS, More Is Better): a: 75.38, aa: 75.50, b: 75.10. SE +/- 0.73, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx265 - Scenario: Live (Seconds, Fewer Is Better): a: 66.99, aa: 66.89, b: 67.26. SE +/- 0.65, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 4075.57 (MIN: 4066.65), aa: 4145.83 (MIN: 4135.73), b: 4147.65 (MIN: 4122.17). SE +/- 9.23, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 0.25, aa: 0.25, b: 0.25. SE +/- 0.00, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 2503.12 (MIN: 2492.43), aa: 2484.90 (MIN: 2478.59), b: 2481.61 (MIN: 2470.87). SE +/- 1.05, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better): a: 93.80, aa: 93.88, b: 93.79. SE +/- 0.06, N = 3
Libplacebo 5.229.1 - Test: av1_grain_lap (FPS, More Is Better): a: 2073.69, aa: 2106.73, b: 2093.70. SE +/- 8.76, N = 3. 1. (CXX) g++ options: -lm -pthread -lglslang -lMachineIndependent -lOSDependent -lHLSL -lOGLCompiler -lGenericCodeGen -lSPVRemapper -lSPIRV -lSPIRV-Tools-opt -lSPIRV-Tools -lpthread -ldl -std=c++11 -O2 -fvisibility=hidden -fPIC -MD -MQ -MF
Libplacebo 5.229.1 - Test: hdr_lut (FPS, More Is Better): a: 2797.23, aa: 2806.16, b: 2840.16. SE +/- 46.08, N = 3. 1. (CXX) g++ options: -lm -pthread -lglslang -lMachineIndependent -lOSDependent -lHLSL -lOGLCompiler -lGenericCodeGen -lSPVRemapper -lSPIRV -lSPIRV-Tools-opt -lSPIRV-Tools -lpthread -ldl -std=c++11 -O2 -fvisibility=hidden -fPIC -MD -MQ -MF
Libplacebo 5.229.1 - Test: hdr_peakdetect (FPS, More Is Better): a: 2610.20, aa: 2598.97, b: 2603.19. SE +/- 4.71, N = 3. 1. (CXX) g++ options: -lm -pthread -lglslang -lMachineIndependent -lOSDependent -lHLSL -lOGLCompiler -lGenericCodeGen -lSPVRemapper -lSPIRV -lSPIRV-Tools-opt -lSPIRV-Tools -lpthread -ldl -std=c++11 -O2 -fvisibility=hidden -fPIC -MD -MQ -MF
Libplacebo 5.229.1 - Test: polar_nocompute (FPS, More Is Better): a: 977.72, aa: 948.28, b: 961.48. SE +/- 3.01, N = 3. 1. (CXX) g++ options: -lm -pthread -lglslang -lMachineIndependent -lOSDependent -lHLSL -lOGLCompiler -lGenericCodeGen -lSPVRemapper -lSPIRV -lSPIRV-Tools-opt -lSPIRV-Tools -lpthread -ldl -std=c++11 -O2 -fvisibility=hidden -fPIC -MD -MQ -MF
Libplacebo 5.229.1 - Test: deband_heavy (FPS, More Is Better): a: 476.59, aa: 479.65, b: 474.53. SE +/- 3.08, N = 3. 1. (CXX) g++ options: -lm -pthread -lglslang -lMachineIndependent -lOSDependent -lHLSL -lOGLCompiler -lGenericCodeGen -lSPVRemapper -lSPIRV -lSPIRV-Tools-opt -lSPIRV-Tools -lpthread -ldl -std=c++11 -O2 -fvisibility=hidden -fPIC -MD -MQ -MF
Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, More Is Better): a: 524680, aa: 528060, b: 519433. SE +/- 4099.93, N = 10. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 8.22740 (MIN: 7), aa: 5.47125 (MIN: 4.57), b: 6.22396 (MIN: 4.61). SE +/- 0.14562, N = 15. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, Fewer Is Better): a: 63.12, aa: 64.02, b: 64.37. SE +/- 0.20, N = 3. 1. (CXX) g++ options: -O3 -fPIC -lm
AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 12.67, aa: 12.68, b: 12.71. SE +/- 0.06, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, More Is Better): aa: 833, a: 832, b: 550
spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, More Is Better): aa: 12217, a: 12196, b: 11978
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better): a: 31.89, aa: 31.83, b: 31.85. SE +/- 0.03, N = 3
JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better): a: 48.69, aa: 47.21, b: 48.24. SE +/- 0.56, N = 3
nginx 1.23.2 - Connections: 1000 (Requests Per Second, More Is Better): aa: 65507.92, a: 65135.63, b: 66302.86. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better): aa: 76617.70, a: 75259.81, b: 76562.05. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
nginx 1.23.2 - Connections: 200 (Requests Per Second, More Is Better): aa: 84210.42, a: 83513.14, b: 82789.42. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
nginx 1.23.2 - Connections: 100 (Requests Per Second, More Is Better): aa: 86726.10, a: 87222.68, b: 83591.65. 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, More Is Better): a: 16.23, aa: 16.27, b: 16.17. SE +/- 0.04, N = 3. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, More Is Better): a: 405.82, aa: 406.71, b: 404.25. SE +/- 0.95, N = 3. 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 15.08, aa: 15.01, b: 15.12. SE +/- 0.08, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better): a: 76.22, aa: 76.25, b: 76.37. SE +/- 0.07, N = 3
Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better): a: 41.52, aa: 41.84, b: 41.43. SE +/- 0.11, N = 3
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (FPS, More Is Better): a: 204.75, aa: 205.03, b: 204.73. SE +/- 0.39, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
FFmpeg 5.1.2 - Encoder: libx264 - Scenario: Live (Seconds, Fewer Is Better): a: 24.66, aa: 24.63, b: 24.67. SE +/- 0.05, N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
SMHasher 2022-08-22 - Hash: FarmHash128 (cycles/hash, Fewer Is Better): a: 64.10, aa: 60.34, b: 61.81. SE +/- 0.72, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: FarmHash128 (MiB/sec, More Is Better): a: 15884.24, aa: 16614.28, b: 16361.64. SE +/- 128.88, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 1.88170 (MIN: 1.76), aa: 1.27895 (MIN: 1.15), b: 1.74121 (MIN: 1.02). SE +/- 0.12895, N = 15. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
SMHasher 2022-08-22 - Hash: MeowHash x86_64 AES-NI (cycles/hash, Fewer Is Better): a: 57.74, aa: 56.98, b: 55.88. SE +/- 0.39, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: MeowHash x86_64 AES-NI (MiB/sec, More Is Better): a: 37183.28, aa: 37904.32, b: 38610.17. SE +/- 285.07, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better): a: 146.63, aa: 148.61, b: 148.59. SE +/- 0.27, N = 3
SMHasher 2022-08-22 - Hash: Spooky32 (cycles/hash, Fewer Is Better): a: 49.06, aa: 48.98, b: 49.50. SE +/- 0.31, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: Spooky32 (MiB/sec, More Is Better): a: 14909.02, aa: 15026.60, b: 14856.10. SE +/- 98.93, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 181.46, a: 180.61, b: 182.26
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 33.06, a: 33.21, b: 32.91
TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better): a: 55.63, aa: 55.47, b: 55.49. SE +/- 0.04, N = 3
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 617.74, a: 611.84, b: 618.14
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 9.7085, a: 9.7748, b: 9.6761
EnCodec 0.1.1 - Target Bandwidth: 24 kbps (Seconds, Fewer Is Better): aa: 54.43, a: 54.42, b: 54.86
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 146.11, a: 146.48, b: 146.64
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 41.06, a: 40.95, b: 40.91
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 617.03, a: 617.65, b: 609.83
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 9.7230, a: 9.7133, b: 9.8238
Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better): a: 106880, aa: 108620, b: 105667. SE +/- 1171.67, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better): a: 204970, aa: 204950, b: 205100. SE +/- 26.46, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, More Is Better): a: 21130, aa: 21260, b: 21397. SE +/- 153.44, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better): a: 102170, aa: 102180, b: 102017. SE +/- 58.12, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better): a: 631.15, aa: 631.74, b: 628.47. SE +/- 0.96, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better): a: 603.58, aa: 603.33, b: 600.87. SE +/- 0.50, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better): a: 31260, aa: 31820, b: 31143. SE +/- 18.56, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better): a: 2705.27, aa: 2709.03, b: 2730.30. SE +/- 7.56, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better): a: 11710, aa: 11720, b: 11660. SE +/- 5.77, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better): a: 226.42, aa: 226.59, b: 225.89. SE +/- 0.18, N = 3. 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 49.23, a: 48.49, b: 49.49
Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 20.31, a: 20.62, b: 20.20
EnCodec 0.1.1 - Target Bandwidth: 6 kbps (Seconds, Fewer Is Better): aa: 48.56, a: 47.33, b: 48.22
EnCodec 0.1.1 - Target Bandwidth: 3 kbps (Seconds, Fewer Is Better): aa: 47.66, a: 47.23, b: 48.45
AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 0.73, aa: 0.73, b: 0.73. SE +/- 0.00, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
EnCodec 0.1.1 - Target Bandwidth: 1.5 kbps (Seconds, Fewer Is Better): aa: 46.08, a: 45.14, b: 46.62
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 124.76, a: 124.79, b: 125.05
Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 8.0151, a: 8.0129, b: 7.9966
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 125.17, a: 125.35, b: 125.27
Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 7.9890, a: 7.9776, b: 7.9824
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 33.32, a: 33.39, b: 33.34
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 30.00, a: 29.95, b: 29.99
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 23.10, aa: 23.20, b: 23.13. SE +/- 0.06, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher 2022-08-22 - Hash: t1ha2_atonce (cycles/hash, Fewer Is Better): a: 32.25, aa: 34.55, b: 33.68. SE +/- 0.28, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: t1ha2_atonce (MiB/sec, More Is Better): a: 16686.57, aa: 15706.92, b: 16085.48. SE +/- 116.42, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (cycles/hash, Fewer Is Better): a: 34.81, aa: 32.88, b: 33.61. SE +/- 0.30, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (MiB/sec, More Is Better): a: 66796.46, aa: 69896.41, b: 68310.66. SE +/- 549.14, N = 15. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 74.01, a: 73.71, b: 73.77
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 81.04, a: 81.37, b: 81.31
Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 113.64, a: 113.46, b: 114.40
Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 52.78, a: 52.85, b: 52.42
FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better): a: 17.26, aa: 17.33, b: 17.21. SE +/- 0.04, N = 5. 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): aa: 53.41, a: 53.41, b: 53.44
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): aa: 112.26, a: 112.28, b: 112.12
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 16.77, a: 16.82, b: 16.83
Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 59.62, a: 59.45, b: 59.41
Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 23.57, a: 23.56, b: 23.60
Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 42.41, a: 42.43, b: 42.36
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): aa: 12.75, a: 12.75, b: 12.76
Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): aa: 78.41, a: 78.39, b: 78.31
Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better): a: 19.55, aa: 19.53, b: 19.53. SE +/- 0.02, N = 3
QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, More Is Better): a: 0.66, aa: 0.66, b: 0.66. SE +/- 0.00, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, More Is Better): a: 2.89, aa: 2.90, b: 2.86. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS, More Is Better): a: 2.46, aa: 2.43, b: 2.49. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 1 - Resolution: 4K (FPS, More Is Better): a: 10.21, aa: 9.63, b: 10.14. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, More Is Better): a: 2.64, aa: 2.62, b: 2.63. SE +/- 0.00, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, More Is Better): a: 11.12, aa: 11.22, b: 11.17. SE +/- 0.06, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, More Is Better): a: 9.71, aa: 9.81, b: 9.70. SE +/- 0.10, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, More Is Better): a: 39.48, aa: 38.50, b: 39.26. SE +/- 0.13, N = 3. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 41.57, aa: 42.17, b: 42.07. SE +/- 0.26, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 34.46, aa: 34.91, b: 34.60. SE +/- 0.08, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Stress-NG 0.14.06 - Test: Context Switching (Bogo Ops/s, More Is Better): aa: 4103283.77, a: 4076744.52, b: 3796353.99. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s, More Is Better): aa: 152.46, a: 158.89, b: 149.79. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, More Is Better): aa: 773.07, a: 747.31, b: 770.89. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, More Is Better): aa: 2717409.38, a: 2600010.05, b: 2563831.67. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: NUMA (Bogo Ops/s, More Is Better): aa: 261.26, a: 260.68, b: 260.02. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s, More Is Better): aa: 7932169.44, a: 7925418.15, b: 7654922.05. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, More Is Better): aa: 6371966.37, a: 6319425.86, b: 6344621.56. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, More Is Better): aa: 3592.78, a: 3599.42, b: 3456.53. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s, More Is Better): aa: 8500.17, a: 8595.20, b: 8961.20. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, More Is Better): aa: 61203.21, a: 61078.06, b: 58814.38. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, More Is Better): aa: 13924963.43, a: 13779284.62, b: 13767394.33. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s, More Is Better): aa: 2465656.19, a: 2463565.68, b: 2466594.73. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, More Is Better): aa: 40572.79, a: 40818.92, b: 40197.60. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, More Is Better): aa: 22525.14, a: 22523.79, b: 22434.69. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, More Is Better): aa: 292.10, a: 294.05, b: 293.29. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, More Is Better): aa: 8765.88, a: 5126.99, b: 4168.54. 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic aa a b 120K 240K 360K 480K 600K 575708.53 575564.96 571674.68 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Glibc Qsort Data Sorting OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc Qsort Data Sorting aa a b 40 80 120 160 200 192.23 180.77 184.77 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: CPU Stress OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Stress aa a b 7K 14K 21K 28K 35K 31954.78 32900.39 30793.53 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Glibc C String Functions OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc C String Functions aa a b 400K 800K 1200K 1600K 2000K 2035501.44 2048965.25 1531955.88 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: Vector Math OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Vector Math aa a b 20K 40K 60K 80K 100K 90998.92 90999.65 87587.66 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
Stress-NG Test: SENDFILE OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: SENDFILE aa a b 50K 100K 150K 200K 250K 215163.99 215308.14 205666.12 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
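The Context Switching figure in the Stress-NG block above counts completed stressor iterations per second (Bogo Ops/s). As a rough, hypothetical illustration of the kind of kernel scheduling work that stressor exercises, the C sketch below ping-pongs one byte between a parent and a child process over two pipes, so every round trip forces at least two context switches. It is not the stress-ng implementation, and its output is not comparable to the Bogo Ops/s numbers reported here.

/* Illustrative only: a minimal pipe ping-pong microbenchmark in the spirit of
 * a context-switching stressor. NOT the stress-ng implementation. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <time.h>

int main(void)
{
    int p2c[2], c2p[2];              /* parent->child and child->parent pipes */
    const long iters = 200000;       /* round trips; each forces >= 2 switches */
    char byte = 'x';

    if (pipe(p2c) || pipe(c2p)) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                  /* child: echo every byte back */
        for (long i = 0; i < iters; i++) {
            if (read(p2c[0], &byte, 1) != 1) _exit(1);
            if (write(c2p[1], &byte, 1) != 1) _exit(1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {       /* parent: ping, wait for pong */
        if (write(p2c[1], &byte, 1) != 1) return 1;
        if (read(c2p[0], &byte, 1) != 1) return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    wait(NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f round trips/s (~2 context switches each)\n", iters / secs);
    return 0;
}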
oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - a: 4.69026, aa: 4.74859, b: 4.73086 - SE +/- 0.01435, N = 3 - MIN (a/aa/b): 4.49 / 4.55 / 4.49. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
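Many of the remaining results carry an SE figure alongside the run count N. Assuming it follows the usual definition of the standard error of the mean over the N recorded trial runs (the export does not restate the definition), it is:

\[
\mathrm{SE} = \frac{s}{\sqrt{N}}, \qquad
s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}
\]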
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better) - a: 47.42, aa: 47.62, b: 47.60 - SE +/- 0.12, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better) - a: 47.55, aa: 47.95, b: 48.23 - SE +/- 0.11, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - a: 45.20, aa: 46.98, b: 45.64 - SE +/- 0.20, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher 2022-08-22 - Hash: fasthash32 (cycles/hash, Fewer Is Better) - a: 36.90, aa: 35.26, b: 36.69 - SE +/- 0.48, N = 4. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: fasthash32 (MiB/sec, More Is Better) - a: 6658.92, aa: 6917.32, b: 6686.25 - SE +/- 77.96, N = 4. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better) - a: 10.15, aa: 10.09, b: 10.15 - SE +/- 0.04, N = 3. 1. (CXX) g++ options: -O3 -fPIC -lm
oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - a: 12.06, aa: 12.03, b: 12.03 - SE +/- 0.01, N = 3 - MIN (a/aa/b): 11.96 / 11.93 / 11.91. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (cycles/hash, Fewer Is Better) - a: 40.07, aa: 41.13, b: 42.58 - SE +/- 0.70, N = 3. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (MiB/sec, More Is Better) - a: 29093.03, aa: 28582.22, b: 27788.51 - SE +/- 369.00, N = 3. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - a: 85.22, aa: 85.90, b: 84.93 - SE +/- 0.06, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SMHasher 2022-08-22 - Hash: wyhash (cycles/hash, Fewer Is Better) - a: 25.31, aa: 25.29, b: 25.91 - SE +/- 0.37, N = 3. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
SMHasher 2022-08-22 - Hash: wyhash (MiB/sec, More Is Better) - a: 23915.75, aa: 24070.66, b: 23571.26 - SE +/- 338.95, N = 3. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
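SMHasher reports each hash twice in this result file: cycles/hash broadly tracks per-call latency on short keys, while MiB/sec reflects bulk throughput over a large buffer. As a hypothetical illustration of how a bulk throughput figure of that kind can be produced, the C sketch below times repeated passes of a simple stand-in hash (FNV-1a, not wyhash, fasthash32, or FarmHash32) over a 1 MiB buffer and divides bytes processed by elapsed time. It is not the SMHasher harness, and its output is not comparable to the MiB/sec figures above.

/* Illustrative only: bulk hash throughput in MiB/sec, in the spirit of an
 * SMHasher-style bulk-speed test. FNV-1a is a placeholder hash. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static uint64_t fnv1a64(const uint8_t *data, size_t len)
{
    uint64_t h = 14695981039346656037ULL;         /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;                    /* FNV prime */
    }
    return h;
}

int main(void)
{
    const size_t len = 1 << 20;                   /* 1 MiB buffer */
    const int reps = 200;
    uint8_t *buf = malloc(len);
    if (!buf) return 1;
    memset(buf, 0xA5, len);

    volatile uint64_t sink = 0;                   /* keep the work alive */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        sink ^= fnv1a64(buf, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib = (double)len * reps / (1024.0 * 1024.0);
    printf("%.1f MiB/sec (sink=%llu)\n", mib / secs, (unsigned long long)sink);
    free(buf);
    return 0;
}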
oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - a: 22.54, aa: 22.55, b: 22.52 - SE +/- 0.02, N = 3 - MIN (a/aa/b): 22.24 / 22.11 / 21.81. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
libavif avifenc 0.11 - Encoder Speed: 6 (Seconds, Fewer Is Better) - a: 6.049, aa: 6.076, b: 6.019 - SE +/- 0.028, N = 3. 1. (CXX) g++ options: -O3 -fPIC -lm
AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - a: 108.47, aa: 108.91, b: 107.41 - SE +/- 0.48, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - a: 109.52, aa: 111.49, b: 110.26 - SE +/- 0.74, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better) - a: 5.480, aa: 5.505, b: 5.497 - SE +/- 0.026, N = 3. 1. (CXX) g++ options: -O3 -fPIC -lm
oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better) - a: 5.27435, aa: 5.25072, b: 5.28794 - SE +/- 0.00411, N = 3 - MIN (a/aa/b): 5.18 / 5.17 / 5.2. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Phoronix Test Suite v10.8.5