2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.
r1 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
r1a r2 r2a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
r2b r3 r4 r5 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1024x768
HammerDB - MariaDB 10.5.9 - Virtual Users: 128 - Warehouses: 250. New Orders Per Minute, More Is Better. r1: 55415 (SE +/- 857.30, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
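Every result in this report carries a standard-error annotation over repeated runs ("SE +/- ..., N = ..."). As a minimal sketch of how such a figure is derived from N timed runs (assuming the usual Bessel-corrected sample standard deviation; the Phoronix Test Suite's exact estimator is not shown in this export):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    # Bessel-corrected sample variance (divide by n - 1)
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance / n)

# Nine hypothetical New Orders Per Minute runs (illustrative values only)
runs = [54100, 55900, 56200, 54800, 55400, 55700, 54500, 56100, 55000]
print(f"mean {sum(runs) / len(runs):.0f}, SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```

A small SE relative to the mean (as in most charts here) indicates the run-to-run spread is tight.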
HammerDB - MariaDB 10.5.9 - Virtual Users: 128 - Warehouses: 500. Transactions Per Minute, More Is Better. r1: 173288 (SE +/- 2691.06, N = 9); r1a: 173228 (SE +/- 1389.03, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 128 - Warehouses: 500. New Orders Per Minute, More Is Better. r1a: 57242 (SE +/- 484.29, N = 9); r1: 57190 (SE +/- 891.59, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 250. Transactions Per Minute, More Is Better. r1: 191397 (SE +/- 2831.11, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 250. New Orders Per Minute, More Is Better. r1: 63279 (SE +/- 937.55, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 250. Transactions Per Minute, More Is Better. r1: 209254 (SE +/- 3390.81, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 250. New Orders Per Minute, More Is Better. r1: 69054 (SE +/- 1078.76, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 16 - Warehouses: 500. Transactions Per Minute, More Is Better. r1: 195258 (SE +/- 3159.46, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 16 - Warehouses: 500. New Orders Per Minute, More Is Better. r1: 64477 (SE +/- 1031.07, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 500. Transactions Per Minute, More Is Better. r1: 208419 (SE +/- 2885.40, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 32 - Warehouses: 500. New Orders Per Minute, More Is Better. r1: 68818 (SE +/- 921.11, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
MariaDB 10.5.2 - Clients: 512. Queries Per Second, More Is Better. r2b: 166 (SE +/- 0.87, N = 3). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt
MariaDB 10.5.2 - Clients: 128. Queries Per Second, More Is Better. r2b: 192 (SE +/- 0.65, N = 3); r3: 189 (SE +/- 0.35, N = 3). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt
HammerDB - MariaDB This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 500. Transactions Per Minute, More Is Better. r1: 194684 (SE +/- 2149.33, N = 3); r1a: 188761 (SE +/- 2084.32, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 500. New Orders Per Minute, More Is Better. r1: 64298 (SE +/- 620.04, N = 3); r1a: 62311 (SE +/- 730.55, N = 9). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d. Seconds, Fewer Is Better. r2b: 307.62 (SE +/- 2.73, N = 9); r1a: 311.96 (SE +/- 0.12, N = 3); r1: 313.92 (SE +/- 0.46, N = 3); r3: 386.39 (SE +/- 4.39, N = 9); r4: 389.70 (SE +/- 3.91, N = 9). 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
GNU Radio GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
GNU Radio - Test: Hilbert Transform. MiB/s, More Is Better. r1: 459.3 (SE +/- 2.02, N = 3); r1a: 459.1 (SE +/- 1.66, N = 3); r3: 408.0 (SE +/- 17.46, N = 9); r4: 373.8 (SE +/- 24.71, N = 9); r2b: 357.4 (SE +/- 47.90, N = 3). 1. 3.8.1.0
GNU Radio - Test: FM Deemphasis Filter. MiB/s, More Is Better. r1: 734.0 (SE +/- 1.94, N = 3); r1a: 727.4 (SE +/- 1.04, N = 3); r2b: 645.8 (SE +/- 53.33, N = 3); r4: 622.0 (SE +/- 32.02, N = 9); r3: 621.0 (SE +/- 31.57, N = 9). 1. 3.8.1.0
GNU Radio - Test: IIR Filter. MiB/s, More Is Better. r1: 610.6 (SE +/- 0.38, N = 3); r1a: 609.5 (SE +/- 0.46, N = 3); r2b: 498.2 (SE +/- 45.07, N = 3); r4: 487.7 (SE +/- 25.67, N = 9); r3: 487.4 (SE +/- 26.49, N = 9). 1. 3.8.1.0
GNU Radio - Test: FIR Filter. MiB/s, More Is Better. r1a: 604.8 (SE +/- 0.20, N = 3); r1: 603.0 (SE +/- 1.45, N = 3); r4: 515.6 (SE +/- 11.25, N = 9); r3: 502.0 (SE +/- 16.19, N = 9); r2b: 470.0 (SE +/- 44.41, N = 3). 1. 3.8.1.0
GNU Radio - Test: Signal Source (Cosine). MiB/s, More Is Better. r1: 2183.5 (SE +/- 0.93, N = 3); r1a: 2175.3 (SE +/- 2.24, N = 3); r3: 1723.9 (SE +/- 72.44, N = 9); r2b: 1684.4 (SE +/- 168.17, N = 3); r4: 1619.2 (SE +/- 82.03, N = 9). 1. 3.8.1.0
GNU Radio - Test: Five Back to Back FIR Filters. MiB/s, More Is Better. r1: 1024.3 (SE +/- 2.54, N = 3); r1a: 1015.2 (SE +/- 2.30, N = 3); r3: 580.5 (SE +/- 39.63, N = 9); r4: 487.9 (SE +/- 48.36, N = 9); r2b: 111.2 (SE +/- 1.12, N = 3). 1. 3.8.1.0
AOM AV1 This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K. Frames Per Second, More Is Better. r1a: 4.17 (SE +/- 0.03, N = 3); r4: 2.10 (SE +/- 0.01, N = 3); r3: 2.05 (SE +/- 0.02, N = 9); r2b: 2.01 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
HammerDB - MariaDB 10.5.9 - Virtual Users: 16 - Warehouses: 250. New Orders Per Minute, More Is Better. r1: 63757 (SE +/- 880.35, N = 3). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
LuaRadio LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
LuaRadio 0.9.1 - Test: Complex Phase. MiB/s, More Is Better. r1a: 548.2 (SE +/- 0.71, N = 3); r1: 546.8 (SE +/- 0.25, N = 3); r2b: 458.7 (SE +/- 3.61, N = 9); r3: 458.2 (SE +/- 4.31, N = 6); r4: 452.7 (SE +/- 4.50, N = 6)
LuaRadio 0.9.1 - Test: Hilbert Transform. MiB/s, More Is Better. r1a: 80.3 (SE +/- 0.00, N = 3); r1: 80.3 (SE +/- 0.00, N = 3); r4: 78.4 (SE +/- 0.61, N = 6); r3: 78.2 (SE +/- 0.47, N = 6); r2b: 78.2 (SE +/- 0.41, N = 9)
LuaRadio 0.9.1 - Test: FM Deemphasis Filter. MiB/s, More Is Better. r1: 410.0 (SE +/- 0.21, N = 3); r1a: 409.6 (SE +/- 1.40, N = 3); r3: 370.3 (SE +/- 4.83, N = 6); r2b: 370.1 (SE +/- 5.30, N = 9); r4: 368.0 (SE +/- 1.19, N = 6)
LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters. MiB/s, More Is Better. r1: 1094.8 (SE +/- 2.24, N = 3); r1a: 1094.5 (SE +/- 0.62, N = 3); r2b: 804.5 (SE +/- 22.87, N = 9); r4: 706.1 (SE +/- 73.21, N = 6); r3: 662.8 (SE +/- 74.31, N = 6)
HammerDB - MariaDB 10.5.9 - Virtual Users: 8 - Warehouses: 250. New Orders Per Minute, More Is Better. r1: 95768 (SE +/- 675.05, N = 3). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 8 - Warehouses: 500. Transactions Per Minute, More Is Better. r1: 285984 (SE +/- 2338.98, N = 3). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
HammerDB - MariaDB 10.5.9 - Virtual Users: 8 - Warehouses: 500. New Orders Per Minute, More Is Better. r1: 94379 (SE +/- 693.36, N = 3). 1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K. Frames Per Second, More Is Better. r1a: 7.55 (SE +/- 0.06, N = 3); r1: 7.37 (SE +/- 0.09, N = 15); r4: 3.23 (SE +/- 0.03, N = 5); r2b: 3.22 (SE +/- 0.03, N = 9); r3: 3.20 (SE +/- 0.04, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 1.1.3 - Model: inception-v3. ms, Fewer Is Better. r4: 52.23 (SE +/- 0.75, N = 12, MIN: 47.47 / MAX: 94.69); r2b: 53.07 (SE +/- 1.54, N = 3, MIN: 49.59 / MAX: 69.62). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0. ms, Fewer Is Better. r2b: 3.213 (SE +/- 0.089, N = 3, MIN: 2.8 / MAX: 6.7); r4: 3.362 (SE +/- 0.021, N = 12, MIN: 2.98 / MAX: 6.66). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: MobileNetV2_224. ms, Fewer Is Better. r2b: 4.078 (SE +/- 0.333, N = 3, MIN: 2.9 / MAX: 13.17); r4: 4.100 (SE +/- 0.135, N = 12, MIN: 2.97 / MAX: 12.98). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: resnet-v2-50. ms, Fewer Is Better. r4: 48.04 (SE +/- 1.07, N = 12, MIN: 42.13 / MAX: 145.2); r2b: 48.73 (SE +/- 2.59, N = 3, MIN: 43.19 / MAX: 69.59). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0. ms, Fewer Is Better. r4: 7.170 (SE +/- 0.078, N = 12, MIN: 6.38 / MAX: 9.97); r2b: 7.174 (SE +/- 0.002, N = 3, MIN: 6.95 / MAX: 7.88). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
SecureMark SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4 - Benchmark: SecureMark-TLS. marks, More Is Better. r1: 225412 (SE +/- 234.37, N = 3); r1a: 225366 (SE +/- 236.12, N = 3); r2b: 225343 (SE +/- 84.15, N = 3); r3: 225291 (SE +/- 267.95, N = 3); r4: 222747 (SE +/- 2769.20, N = 3). 1. (CC) gcc options: -pedantic -O3
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K. Frames Per Second, More Is Better. r1a: 0.19 (SE +/- 0.00, N = 5); r3: 0.15 (SE +/- 0.00, N = 3); r4: 0.14 (SE +/- 0.00, N = 3); r2b: 0.14 (SE +/- 0.00, N = 12). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p. Frames Per Second, More Is Better. r1a: 6.89 (SE +/- 0.02, N = 3); r4: 3.36 (SE +/- 0.01, N = 3); r3: 3.36 (SE +/- 0.04, N = 5); r2b: 3.30 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
LuxCoreRender LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
LuxCoreRender 2.5 - Scene: Orange Juice - Acceleration: CPU. M samples/sec, More Is Better. r1: 14.36 (SE +/- 0.13, N = 3, MIN: 11.58 / MAX: 19.44); r2b: 14.28 (SE +/- 0.18, N = 3, MIN: 11.93 / MAX: 17.73); r1a: 14.26 (SE +/- 0.21, N = 3, MIN: 11.6 / MAX: 19.3); r4: 13.94 (SE +/- 0.13, N = 15, MIN: 11.06 / MAX: 17.84); r3: 13.89 (SE +/- 0.12, N = 15, MIN: 11.08 / MAX: 17.77)
LuxCoreRender
LuxCoreRender 2.5 - Scene: DLSC - Acceleration: CPU. M samples/sec, More Is Better. r1: 9.70 (SE +/- 0.09, N = 3, MIN: 8.98 / MAX: 12.22); r1a: 9.61 (SE +/- 0.09, N = 15, MIN: 8 / MAX: 12.27); r2b: 9.27 (SE +/- 0.08, N = 15, MIN: 8.31 / MAX: 11.98); r4: 9.25 (SE +/- 0.09, N = 3, MIN: 8.59 / MAX: 11.4); r3: 9.24 (SE +/- 0.10, N = 3, MIN: 8.74 / MAX: 11.37)
Intel Memory Latency Checker Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.
Intel Memory Latency Checker - Test: Max Bandwidth - Stream-Triad Like. MB/s, More Is Better. r1: 325766.94 (SE +/- 25.05, N = 3); r2b: 325409.99 (SE +/- 50.20, N = 3); r4: 325314.62 (SE +/- 7.71, N = 3); r5: 325312.30 (SE +/- 22.58, N = 3); r2a: 325260.41 (SE +/- 53.08, N = 3); r3: 325218.50 (SE +/- 50.80, N = 3); r1a: 325184.58 (SE +/- 11.61, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - 1:1 Reads-Writes. MB/s, More Is Better. r2a: 442460.05 (SE +/- 1844.14, N = 3); r2b: 441732.77 (SE +/- 3117.58, N = 3); r1a: 441408.09 (SE +/- 1093.30, N = 3); r3: 440939.22 (SE +/- 276.68, N = 3); r4: 440315.41 (SE +/- 2322.32, N = 3); r5: 440205.22 (SE +/- 1051.98, N = 3); r1: 439496.74 (SE +/- 821.19, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - 2:1 Reads-Writes. MB/s, More Is Better. r1: 459455.38 (SE +/- 33.49, N = 3); r2b: 459226.53 (SE +/- 51.02, N = 3); r4: 458790.96 (SE +/- 8.60, N = 3); r5: 458756.46 (SE +/- 53.22, N = 3); r3: 457141.24 (SE +/- 89.89, N = 3); r1a: 456629.89 (SE +/- 129.26, N = 3); r2a: 456545.88 (SE +/- 54.98, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - 3:1 Reads-Writes. MB/s, More Is Better. r1: 426148.96 (SE +/- 105.41, N = 3); r2b: 425997.22 (SE +/- 71.38, N = 3); r4: 425848.09 (SE +/- 67.02, N = 3); r5: 425467.51 (SE +/- 133.64, N = 3); r3: 424925.84 (SE +/- 109.66, N = 3); r2a: 424818.83 (SE +/- 392.90, N = 3); r1a: 424612.62 (SE +/- 465.24, N = 3)
Intel Memory Latency Checker - Test: Max Bandwidth - All Reads. MB/s, More Is Better. r2a: 358456.09 (SE +/- 107.35, N = 3); r1a: 358364.56 (SE +/- 142.76, N = 3); r3: 358268.00 (SE +/- 59.61, N = 3); r4: 357925.98 (SE +/- 83.70, N = 3); r2b: 357774.43 (SE +/- 83.63, N = 3); r5: 357550.82 (SE +/- 46.23, N = 3); r1: 357285.28 (SE +/- 67.01, N = 3)
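MLC itself is binary-only, but the shape of a bandwidth measurement — move a large buffer and divide bytes transferred by elapsed time — can be sketched in a few lines. This is a toy single-threaded estimate under assumed buffer sizes, not MLC's methodology, and it will land far below the multi-channel peaks reported above:

```python
import time

def copy_bandwidth_mb_s(size_mb=64, iterations=4):
    """Rough copy bandwidth: time full passes over a large buffer, report MB/s."""
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(iterations):
        dst = bytes(src)  # one full read of src plus one full write of dst
    elapsed = time.perf_counter() - start
    # Each iteration moves size_mb megabytes from src into dst
    return size_mb * iterations / elapsed

print(f"~{copy_bandwidth_mb_s():.0f} MB/s (toy copy test)")
```

Real tools like MLC additionally control read/write mix, thread count, and NUMA placement, which is why the charts above distinguish 1:1, 2:1, 3:1, and all-read traffic.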
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K. Frames Per Second, More Is Better. r1a: 15.19 (SE +/- 0.03, N = 3); r1: 15.09 (SE +/- 0.05, N = 3); r4: 6.00 (SE +/- 0.01, N = 3); r3: 5.97 (SE +/- 0.07, N = 12); r2b: 5.97 (SE +/- 0.06, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better. r2b: 791.70 (SE +/- 0.61, N = 3, MIN: 769.61); r1a: 793.36 (SE +/- 1.56, N = 3, MIN: 765.14); r3: 793.92 (SE +/- 0.83, N = 3, MIN: 769); r1: 804.39 (SE +/- 7.01, N = 3, MIN: 763.49); r4: 811.94 (SE +/- 16.86, N = 14, MIN: 761.61). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K. Frames Per Second, More Is Better. r1: 29.20 (SE +/- 0.19, N = 3); r1a: 28.99 (SE +/- 0.29, N = 5); r4: 12.10 (SE +/- 0.17, N = 3); r2b: 12.03 (SE +/- 0.08, N = 15); r3: 11.94 (SE +/- 0.12, N = 15). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
GNU GMP GMPbench GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
GNU GMP GMPbench 6.2.1 - Total Time. GMPbench Score, More Is Better. r1a: 4642.8; r1: 4642.1; r4: 4525.7; r2b: 4524.5; r3: 4504.5. 1. (CC) gcc options: -O3 -fomit-frame-pointer -lm
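GMPbench itself links against libgmp; the widening-multiplication workload it stresses can be illustrated with Python's arbitrary-precision integers. This is a toy analogue under assumed operand sizes, not the benchmark, and Python's big-int kernel is much slower than GMP's:

```python
import time

def widening_multiply_rate(bits=4096, rounds=2000):
    """Repeatedly multiply two n-bit integers into a ~2n-bit product, report mults/sec."""
    a = (1 << bits) - 1          # n-bit operand of all ones
    b = (1 << bits) - 3
    start = time.perf_counter()
    for _ in range(rounds):
        product = a * b          # widening: result is roughly twice the operand width
    elapsed = time.perf_counter() - start
    assert product.bit_length() == 2 * bits
    return rounds / elapsed

print(f"{widening_multiply_rate():.0f} mults/sec (toy analogue)")
```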
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.
Blender 2.92 - Blend File: Barbershop - Compute: CPU-Only. Seconds, Fewer Is Better. r4: 109.96 (SE +/- 0.59, N = 3); r2b: 110.02 (SE +/- 0.18, N = 3)
Timed Node.js Compilation This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.
Timed Node.js Compilation 15.11 - Time To Compile. Seconds, Fewer Is Better. r1a: 100.45 (SE +/- 0.29, N = 3); r1: 101.10 (SE +/- 0.27, N = 3); r2b: 110.93 (SE +/- 0.50, N = 3); r4: 111.67 (SE +/- 0.78, N = 3); r3: 111.79 (SE +/- 0.68, N = 3)
ViennaCL ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
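The level-1 kernels in the ViennaCL charts below (dDOT, dAXPY, dCOPY and their single-precision s-prefixed variants) are elementary vector operations. As a plain-Python reference for what each one computes (illustrative only; ViennaCL runs these over large buffers in parallel via OpenMP):

```python
def axpy(a, x, y):
    """AXPY: y <- a*x + y, elementwise."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """DOT: sum of elementwise products."""
    return sum(xi * yi for xi, yi in zip(x, y))

def copy(x):
    """COPY: duplicate the vector."""
    return list(x)

print(axpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # [12.0, 24.0]
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

These kernels are memory-bound, which is why the GB/s figures below track the memory-bandwidth results more closely than the compute-bound dGEMM numbers do.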
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT. GFLOPs/s, More Is Better. r1a: 77.2 (SE +/- 0.90, N = 3); r1: 76.3 (SE +/- 1.45, N = 13); r4: 63.7 (SE +/- 2.94, N = 15); r3: 61.7 (SE +/- 2.33, N = 15); r2b: 54.7 (SE +/- 1.75, N = 15). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN. GFLOPs/s, More Is Better. r1a: 77.4 (SE +/- 0.69, N = 3); r1: 76.0 (SE +/- 1.67, N = 13); r4: 67.6 (SE +/- 2.43, N = 14); r3: 66.9 (SE +/- 1.88, N = 15); r2b: 62.3 (SE +/- 2.02, N = 15). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT. GFLOPs/s, More Is Better. r1a: 76.8 (SE +/- 1.01, N = 3); r1: 75.6 (SE +/- 1.88, N = 13); r4: 72.4 (SE +/- 1.98, N = 15); r3: 68.9 (SE +/- 1.99, N = 15); r2b: 59.8 (SE +/- 1.14, N = 15). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN. GFLOPs/s, More Is Better. r1: 73.5 (SE +/- 1.42, N = 14); r1a: 72.3 (SE +/- 3.11, N = 3); r4: 70.8 (SE +/- 1.95, N = 15); r3: 66.4 (SE +/- 2.18, N = 15); r2b: 61.9 (SE +/- 2.06, N = 15). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-T. GB/s, More Is Better. r1: 719.0 (SE +/- 2.46, N = 13); r4: 647.0 (SE +/- 3.20, N = 15); r3: 647.0 (SE +/- 2.02, N = 15); r2b: 389.9 (SE +/- 27.49, N = 15); r1a: 319.0 (SE +/- 5.04, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N. GB/s, More Is Better. r1: 72.3 (SE +/- 0.36, N = 14); r4: 70.2 (SE +/- 0.25, N = 15); r3: 64.3 (SE +/- 3.93, N = 15); r1a: 63.6 (SE +/- 2.90, N = 3); r2b: 62.3 (SE +/- 3.75, N = 15). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dDOT. GB/s, More Is Better. r4: 765.00 (SE +/- 2.76, N = 15); r1: 720.00 (SE +/- 6.43, N = 14); r3: 713.47 (SE +/- 50.57, N = 15); r2b: 447.65 (SE +/- 34.40, N = 14); r1a: 371.00 (SE +/- 34.44, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dAXPY. GB/s, More Is Better. r4: 1158.0 (SE +/- 5.62, N = 15); r1: 1058.0 (SE +/- 20.63, N = 14); r3: 1024.2 (SE +/- 82.34, N = 15); r2b: 507.1 (SE +/- 40.80, N = 15); r1a: 392.0 (SE +/- 23.02, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY. GB/s, More Is Better. r4: 936.0 (SE +/- 9.73, N = 15); r3: 913.0 (SE +/- 26.97, N = 15); r1: 843.0 (SE +/- 25.47, N = 14); r2b: 422.2 (SE +/- 35.11, N = 15); r1a: 335.0 (SE +/- 29.90, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - sDOT. GB/s, More Is Better. r1: 620 (SE +/- 2.34, N = 14); r4: 535 (SE +/- 2.45, N = 15); r3: 532 (SE +/- 2.55, N = 15); r2b: 349 (SE +/- 5.60, N = 15); r1a: 277 (SE +/- 11.67, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY. GB/s, More Is Better. r1: 1003 (SE +/- 6.62, N = 14); r3: 862 (SE +/- 8.11, N = 15); r4: 855 (SE +/- 11.35, N = 15); r2b: 474 (SE +/- 10.36, N = 15); r1a: 370 (SE +/- 15.25, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
ViennaCL 1.7.1 - Test: CPU BLAS - sCOPY. GB/s, More Is Better. r1: 1834 (SE +/- 16.63, N = 14); r4: 1167 (SE +/- 54.62, N = 15); r3: 1135 (SE +/- 51.32, N = 15); r2b: 691 (SE +/- 22.07, N = 15); r1a: 504 (SE +/- 4.10, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
Xmrig Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M. H/s, More Is Better. r3: 20652.9 (SE +/- 245.77, N = 3); r4: 20574.6 (SE +/- 243.31, N = 15); r1a: 19452.0 (SE +/- 20.55, N = 3); r2b: 19311.1 (SE +/- 151.73, N = 3); r1: 19299.5 (SE +/- 23.28, N = 3). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
AOM AV1
AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p. Frames Per Second, More Is Better. r1a: 21.25 (SE +/- 0.17, N = 3); r2b: 7.45 (SE +/- 0.01, N = 3); r4: 7.43 (SE +/- 0.05, N = 3); r3: 7.38 (SE +/- 0.06, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
Sysbench 1.0.20 - Test: CPU. Events Per Second, More Is Better. r4: 214241.34 (SE +/- 269.51, N = 3); r2b: 214210.83 (SE +/- 247.29, N = 3). 1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
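Sysbench's CPU test verifies primes up to a limit and reports completed events per second. A minimal Python analogue of one such event (the limit of 10000 is taken from sysbench's `--cpu-max-prime` default; the exact per-event workload is an assumption, not sysbench's source):

```python
import time

def cpu_event(max_prime=10000):
    """One sysbench-style CPU event: trial-divide every candidate up to max_prime."""
    count = 0
    for c in range(3, max_prime + 1):
        t = 2
        while t * t <= c:
            if c % t == 0:
                break
            t += 1
        else:
            count += 1  # c had no divisor, so it is prime
    return count  # number of odd primes found up to max_prime

start = time.perf_counter()
events = 5
for _ in range(events):
    cpu_event()
print(f"{events / (time.perf_counter() - start):.2f} events/sec (toy analogue)")
```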
Blender
Blender 2.92 - Blend File: Pabellon Barcelona - Compute: CPU-Only. Seconds, Fewer Is Better. r2b: 88.57 (SE +/- 0.08, N = 3); r4: 88.68 (SE +/- 0.28, N = 3)
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
Timed Wasmer Compilation 1.0.2 - Time To Compile. Seconds, Fewer Is Better. r1a: 61.93 (SE +/- 0.62, N = 3); r1: 62.16 (SE +/- 0.22, N = 3); r4: 70.76 (SE +/- 0.51, N = 3); r3: 71.13 (SE +/- 0.66, N = 7); r2b: 71.93 (SE +/- 0.42, N = 3). 1. (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.1.2 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU r4 r3 r1 r1a r2b 200 400 600 800 1000 SE +/- 2.67, N = 3 SE +/- 1.09, N = 3 SE +/- 7.46, N = 3 SE +/- 4.49, N = 3 SE +/- 9.76, N = 3 792.30 796.69 801.41 804.32 808.29 MIN: 763.96 MIN: 771.28 MIN: 767.38 MIN: 765.37 MIN: 767.97 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU. r2b: 789.84 (SE +/- 1.48, N = 3, MIN: 767.03); r1a: 791.93 (SE +/- 3.65, N = 3, MIN: 765.01); r4: 792.05 (SE +/- 1.96, N = 3, MIN: 765.9); r1: 792.83 (SE +/- 2.07, N = 3, MIN: 763.76); r3: 793.08 (SE +/- 2.18, N = 3, MIN: 768.2). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU. r1: 1.21594 (SE +/- 0.01080, N = 15, MIN: 0.84); r1a: 1.22278 (SE +/- 0.01126, N = 15, MIN: 0.85); r2b: 1.23796 (SE +/- 0.01174, N = 15, MIN: 0.87); r4: 1.24116 (SE +/- 0.00891, N = 15, MIN: 0.85); r3: 1.24508 (SE +/- 0.01066, N = 15, MIN: 0.89). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU. r2b: 446.39 (SE +/- 0.78, N = 3, MIN: 432.04); r4: 446.54 (SE +/- 1.10, N = 3, MIN: 429.71); r1a: 447.31 (SE +/- 0.90, N = 3, MIN: 432.33); r1: 447.97 (SE +/- 0.58, N = 3, MIN: 433.22); r3: 450.65 (SE +/- 2.40, N = 3, MIN: 432.96). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. r1: 445.14 (SE +/- 0.58, N = 3, MIN: 431.52); r1a: 446.94 (SE +/- 1.79, N = 3, MIN: 430.47); r3: 447.14 (SE +/- 1.24, N = 3, MIN: 432.42); r2b: 447.29 (SE +/- 0.65, N = 3, MIN: 433.06); r4: 448.91 (SE +/- 3.51, N = 3, MIN: 431.33). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU. r1: 445.52 (SE +/- 0.85, N = 3, MIN: 431.18); r3: 446.92 (SE +/- 0.04, N = 3, MIN: 433.64); r1a: 447.44 (SE +/- 2.18, N = 3, MIN: 429.4); r2b: 447.70 (SE +/- 1.13, N = 3, MIN: 433.04); r4: 447.96 (SE +/- 2.63, N = 3, MIN: 429.99). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
AOM AV1 This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K. r1: 33.07 (SE +/- 0.28, N = 3); r1a: 32.51 (SE +/- 0.28, N = 3); r4: 14.73 (SE +/- 0.08, N = 3); r2b: 14.30 (SE +/- 0.15, N = 15); r3: 14.06 (SE +/- 0.18, N = 4). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Blender
OpenBenchmarking.org Seconds, Fewer Is Better. Blender 2.92, Blend File: Classroom - Compute: CPU-Only. r2b: 71.78 (SE +/- 0.08, N = 3); r4: 72.29 (SE +/- 0.13, N = 3).
KTX-Software toktx This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: UASTC 4 + Zstd Compression 19. r2b: 56.66 (SE +/- 0.68, N = 4); r4: 56.77 (SE +/- 0.74, N = 3).
oneDNN
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU. r1: 7.49467 (SE +/- 0.02080, N = 3, MIN: 6.98); r1a: 7.50059 (SE +/- 0.01835, N = 3, MIN: 6.91); r3: 28.18150 (SE +/- 0.30585, N = 15, MIN: 14.34); r2b: 28.40230 (SE +/- 0.31773, N = 13, MIN: 14.66); r4: 28.46130 (SE +/- 0.38629, N = 12, MIN: 14.76). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
LuxCoreRender LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org M samples/sec, More Is Better. LuxCoreRender 2.5, Scene: Danish Mood - Acceleration: CPU. r1a: 7.55 (SE +/- 0.10, N = 3, MIN: 3.28 / MAX: 8.86); r1: 7.42 (SE +/- 0.08, N = 3, MIN: 3.2 / MAX: 8.74); r2b: 5.73 (SE +/- 0.04, N = 3, MIN: 1.3 / MAX: 7.65); r4: 5.68 (SE +/- 0.04, N = 3, MIN: 1.26 / MAX: 7.6); r3: 5.65 (SE +/- 0.07, N = 3, MIN: 1.24 / MAX: 7.63).
OpenBenchmarking.org M samples/sec, More Is Better. LuxCoreRender 2.5, Scene: LuxCore Benchmark - Acceleration: CPU. r1a: 8.04 (SE +/- 0.01, N = 3, MIN: 3.51 / MAX: 9.33); r1: 7.84 (SE +/- 0.05, N = 3, MIN: 3.44 / MAX: 9.2); r3: 5.92 (SE +/- 0.01, N = 3, MIN: 1.15 / MAX: 7.98); r4: 5.87 (SE +/- 0.03, N = 3, MIN: 1.15 / MAX: 7.95); r2b: 5.84 (SE +/- 0.02, N = 3, MIN: 1.16 / MAX: 7.97).
AOM AV1
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p. r1a: 0.51 (SE +/- 0.00, N = 3); r4: 0.33 (SE +/- 0.00, N = 3); r3: 0.33 (SE +/- 0.00, N = 3); r2b: 0.32 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
LuxCoreRender
OpenBenchmarking.org M samples/sec, More Is Better. LuxCoreRender 2.5, Scene: Rainbow Colors and Prism - Acceleration: CPU. r1: 17.04 (SE +/- 1.05, N = 15, MIN: 11.27 / MAX: 22.05); r3: 16.47 (SE +/- 1.13, N = 12, MIN: 10.39 / MAX: 21.43); r4: 14.79 (SE +/- 0.79, N = 12, MIN: 9.85 / MAX: 20.95); r2b: 13.42 (SE +/- 0.87, N = 13, MIN: 8.28 / MAX: 21.15); r1a: 13.34 (SE +/- 0.47, N = 15, MIN: 10.32 / MAX: 17.45).
AOM AV1
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p. r1a: 28.66 (SE +/- 0.06, N = 3); r4: 10.54 (SE +/- 0.05, N = 3); r3: 10.39 (SE +/- 0.01, N = 3); r2b: 10.39 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Blender
OpenBenchmarking.org Seconds, Fewer Is Better. Blender 2.92, Blend File: Fishy Cat - Compute: CPU-Only. r2b: 46.38 (SE +/- 0.15, N = 3); r4: 46.73 (SE +/- 0.25, N = 3).
oneDNN
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU. r1: 2.96135 (SE +/- 0.00128, N = 3, MIN: 2.84); r1a: 2.96857 (SE +/- 0.00276, N = 3, MIN: 2.84); r2b: 3.00464 (SE +/- 0.02287, N = 13, MIN: 2.84); r4: 3.00907 (SE +/- 0.02449, N = 14, MIN: 2.84); r3: 3.00929 (SE +/- 0.02478, N = 14, MIN: 2.84). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
VOSK Speech Recognition Toolkit VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. VOSK Speech Recognition Toolkit 0.3.21. r1a: 35.01 (SE +/- 0.29, N = 8); r4: 35.50 (SE +/- 0.32, N = 3); r3: 35.58 (SE +/- 0.43, N = 3); r1: 35.92 (SE +/- 0.32, N = 3); r2b: 36.42 (SE +/- 0.43, N = 3).
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Nodes Per Second, More Is Better. Stockfish 13, Total Time. r3: 189214499 (SE +/- 1924842.52, N = 3); r1a: 186263552 (SE +/- 2404481.41, N = 3); r4: 186013261 (SE +/- 2183262.34, N = 4); r1: 181644819 (SE +/- 1585265.68, N = 15); r2b: 181554218 (SE +/- 1982639.48, N = 3). 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
srsLTE srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org UE Mb/s, More Is Better. srsLTE 20.10.1, Test: PHY_DL_Test. r4: 78.3 (SE +/- 0.62, N = 3); r1a: 77.3 (SE +/- 1.16, N = 3); r1: 76.9 (SE +/- 0.76, N = 3); r3: 76.1 (SE +/- 1.14, N = 3); r2b: 75.0 (SE +/- 0.38, N = 3). 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better. srsLTE 20.10.1, Test: PHY_DL_Test. r1a: 184.2 (SE +/- 0.36, N = 3); r4: 183.7 (SE +/- 0.58, N = 3); r1: 183.4 (SE +/- 1.15, N = 3); r3: 181.6 (SE +/- 2.42, N = 3); r2b: 181.6 (SE +/- 1.23, N = 3). 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
srsLTE
OpenBenchmarking.org Samples / Second, More Is Better. srsLTE 20.10.1, Test: OFDM_Test. r3: 120833333 (SE +/- 600925.21, N = 3); r2b: 120733333 (SE +/- 366666.67, N = 3); r4: 120666667 (SE +/- 233333.33, N = 3); r1: 120300000 (SE +/- 611010.09, N = 3); r1a: 120133333 (SE +/- 240370.09, N = 3). 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
Sysbench This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/sec, More Is Better. Sysbench 1.0.20, Test: RAM / Memory. r4: 12553.44 (SE +/- 118.72, N = 15); r2b: 12510.56 (SE +/- 125.16, N = 15). 1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
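With two results this close, the gap is well inside the reported error bars. A quick sketch of that sanity check using the RAM numbers above (a rough comparison against combined standard errors, not a formal significance test):

```python
# Sysbench RAM / Memory results from the chart above (MiB/sec).
a, se_a = 12553.44, 118.72   # r4
b, se_b = 12510.56, 125.16   # r2b

delta = abs(a - b)           # 42.88 MiB/sec apart
noise = se_a + se_b          # 243.88 MiB/sec of combined error
within_noise = delta <= noise  # the runs are statistically indistinguishable
```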
Botan Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: AES-256 - Decrypt. r1a: 5663.61 (SE +/- 0.12, N = 3); r1: 5663.06 (SE +/- 1.20, N = 3); r2b: 5662.76 (SE +/- 0.94, N = 3); r3: 5662.34 (SE +/- 1.10, N = 3); r4: 5650.14 (SE +/- 12.66, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: AES-256. r1a: 5670.81 (SE +/- 0.28, N = 3); r1: 5669.70 (SE +/- 0.92, N = 3); r4: 5612.00 (SE +/- 51.03, N = 3); r2b: 5606.97 (SE +/- 55.60, N = 3); r3: 5593.37 (SE +/- 42.23, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
Basis Universal Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: ETC1S. r2b: 34.24 (SE +/- 0.21, N = 3); r4: 34.42 (SE +/- 0.42, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: UASTC Level 0. r4: 11.23 (SE +/- 0.08, N = 3); r2b: 11.25 (SE +/- 0.08, N = 15). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
AOM AV1
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p. r1a: 125.25 (SE +/- 0.82, N = 15); r3: 43.42 (SE +/- 0.31, N = 15); r2b: 43.26 (SE +/- 0.49, N = 3); r4: 42.37 (SE +/- 0.28, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Blender
OpenBenchmarking.org Seconds, Fewer Is Better. Blender 2.92, Blend File: BMW27 - Compute: CPU-Only. r2b: 29.56 (SE +/- 0.08, N = 3); r4: 29.69 (SE +/- 0.32, N = 3).
Botan
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: ChaCha20Poly1305 - Decrypt. r1a: 619.54 (SE +/- 0.57, N = 3); r1: 619.46 (SE +/- 0.40, N = 3); r4: 615.98 (SE +/- 2.81, N = 3); r2b: 612.44 (SE +/- 3.49, N = 3); r3: 612.15 (SE +/- 3.74, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: ChaCha20Poly1305. r1: 623.49 (SE +/- 0.03, N = 3); r1a: 623.20 (SE +/- 0.17, N = 3); r4: 619.64 (SE +/- 2.98, N = 3); r3: 616.50 (SE +/- 3.19, N = 3); r2b: 615.81 (SE +/- 3.48, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Blowfish - Decrypt. r1a: 363.33 (SE +/- 0.06, N = 3); r3: 363.31 (SE +/- 0.03, N = 3); r4: 363.28 (SE +/- 0.07, N = 3); r1: 363.26 (SE +/- 0.05, N = 3); r2b: 363.20 (SE +/- 0.04, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Blowfish. r1a: 363.62 (SE +/- 0.05, N = 3); r1: 363.04 (SE +/- 0.56, N = 3); r2b: 362.93 (SE +/- 0.11, N = 3); r4: 359.57 (SE +/- 3.51, N = 3); r3: 359.45 (SE +/- 3.73, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Twofish - Decrypt. r3: 292.83 (SE +/- 0.06, N = 3); r1: 292.74 (SE +/- 0.14, N = 3); r4: 292.61 (SE +/- 0.04, N = 3); r2b: 292.40 (SE +/- 0.12, N = 3); r1a: 292.37 (SE +/- 0.11, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: Twofish. r1: 289.13 (SE +/- 0.14, N = 3); r1a: 288.85 (SE +/- 0.14, N = 3); r2b: 288.56 (SE +/- 0.11, N = 3); r3: 286.18 (SE +/- 2.66, N = 3); r4: 286.00 (SE +/- 2.83, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: CAST-256 - Decrypt. r2b: 116.08 (SE +/- 0.01, N = 3); r1: 116.07 (SE +/- 0.01, N = 3); r4: 116.07 (SE +/- 0.01, N = 3); r1a: 116.07 (SE +/- 0.01, N = 3); r3: 115.72 (SE +/- 0.35, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: CAST-256. r1: 115.97 (SE +/- 0.01, N = 3); r1a: 115.97 (SE +/- 0.01, N = 3); r2b: 114.66 (SE +/- 1.15, N = 3); r4: 114.65 (SE +/- 1.17, N = 3); r3: 114.52 (SE +/- 1.33, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: KASUMI - Decrypt. r1: 74.32 (SE +/- 0.01, N = 3); r3: 74.31 (SE +/- 0.01, N = 3); r4: 74.29 (SE +/- 0.02, N = 3); r1a: 74.29 (SE +/- 0.01, N = 3); r2b: 74.28 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.org MiB/s, More Is Better. Botan 2.17.3, Test: KASUMI. r1a: 77.31 (SE +/- 0.04, N = 3); r1: 77.29 (SE +/- 0.02, N = 3); r3: 76.41 (SE +/- 0.77, N = 3); r4: 76.40 (SE +/- 0.87, N = 3); r2b: 76.29 (SE +/- 1.01, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better. Xmrig 6.12.1, Variant: Wownero - Hash Count: 1M. r1a: 50166.1 (SE +/- 588.34, N = 3); r4: 49937.3 (SE +/- 235.04, N = 3); r2b: 49908.3 (SE +/- 238.38, N = 3); r3: 49813.4 (SE +/- 358.18, N = 3); r1: 48051.5 (SE +/- 425.40, N = 7). 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Intel Memory Latency Checker Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - Stream-Triad Like. r1: 324377.2 (SE +/- 177.93, N = 3); r5: 324234.5 (SE +/- 55.81, N = 3); r3: 324227.4 (SE +/- 32.03, N = 3); r2b: 324209.8 (SE +/- 12.95, N = 3); r4: 324112.8 (SE +/- 60.42, N = 3); r1a: 323924.2 (SE +/- 38.10, N = 3); r2a: 323826.9 (SE +/- 34.05, N = 3).
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 1:1 Reads-Writes. r3: 449554.1 (SE +/- 138.13, N = 3); r5: 448800.1 (SE +/- 847.23, N = 3); r4: 446396.0 (SE +/- 1601.80, N = 3); r1a: 442843.2 (SE +/- 148.63, N = 3); r1: 442422.3 (SE +/- 1187.16, N = 3); r2a: 442144.2 (SE +/- 212.40, N = 3); r2b: 440454.7 (SE +/- 314.54, N = 3).
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 2:1 Reads-Writes. r2b: 459309.8 (SE +/- 64.32, N = 3); r1: 459038.6 (SE +/- 274.15, N = 3); r4: 458941.9 (SE +/- 36.24, N = 3); r5: 458830.6 (SE +/- 12.06, N = 3); r3: 457190.5 (SE +/- 73.04, N = 3); r2a: 456408.6 (SE +/- 115.55, N = 3); r1a: 456260.3 (SE +/- 130.28, N = 3).
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 3:1 Reads-Writes. r1: 425933.7 (SE +/- 163.24, N = 3); r2b: 425925.6 (SE +/- 25.04, N = 3); r4: 425822.1 (SE +/- 23.30, N = 3); r5: 425508.1 (SE +/- 23.30, N = 3); r3: 424904.5 (SE +/- 88.34, N = 3); r1a: 424096.6 (SE +/- 94.95, N = 3); r2a: 424077.3 (SE +/- 236.99, N = 3).
OpenBenchmarking.org MB/s, More Is Better. Intel Memory Latency Checker, Test: Peak Injection Bandwidth - All Reads. r3: 358463.7 (SE +/- 24.95, N = 3); r1a: 358385.5 (SE +/- 14.58, N = 3); r2a: 358269.7 (SE +/- 37.47, N = 3); r4: 358110.5 (SE +/- 26.62, N = 3); r2b: 357742.9 (SE +/- 14.54, N = 3); r5: 357722.7 (SE +/- 23.85, N = 3); r1: 356476.2 (SE +/- 709.43, N = 3).
oneDNN
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU. r1: 0.338327 (SE +/- 0.000853, N = 3, MIN: 0.3); r4: 0.340243 (SE +/- 0.004121, N = 3, MIN: 0.3); r1a: 0.341663 (SE +/- 0.002562, N = 3, MIN: 0.31); r2b: 0.341893 (SE +/- 0.003448, N = 5, MIN: 0.3); r3: 0.341955 (SE +/- 0.003372, N = 6, MIN: 0.31). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU. r1a: 0.213643 (SE +/- 0.000781, N = 3, MIN: 0.19); r4: 0.215085 (SE +/- 0.001544, N = 12, MIN: 0.19); r1: 0.215115 (SE +/- 0.000867, N = 3, MIN: 0.19); r3: 0.216586 (SE +/- 0.002019, N = 7, MIN: 0.19); r2b: 0.216806 (SE +/- 0.001893, N = 8, MIN: 0.19). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
libjpeg-turbo tjbench tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Megapixels/sec, More Is Better. libjpeg-turbo tjbench 2.1.0, Test: Decompression Throughput. r1: 161.63 (SE +/- 0.15, N = 3); r2b: 160.26 (SE +/- 0.07, N = 3); r4: 159.24 (SE +/- 0.47, N = 3); r3: 159.19 (SE +/- 1.04, N = 3); r1a: 156.97 (SE +/- 0.39, N = 3). 1. (CC) gcc options: -O3 -rdynamic
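Throughput in Megapixels/sec translates directly into per-image decode time; at r1's 161.63 MP/s, for instance, a single 1920x1080 frame decodes in roughly 12.8 ms. A back-of-the-envelope conversion, ignoring any per-image overhead:

```python
# Convert tjbench decompression throughput to per-image decode time
# for a 1080p image (throughput value from the chart above).
mp_per_image = 1920 * 1080 / 1e6     # 2.0736 megapixels per 1080p frame
throughput = 161.63                  # Megapixels/sec (r1)
ms_per_image = mp_per_image / throughput * 1000
```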
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. ASTC Encoder 2.4, Preset: Thorough. r2b: 9.2907 (SE +/- 0.0796, N = 8); r4: 9.3091 (SE +/- 0.0879, N = 7). 1. (CXX) g++ options: -O3 -flto -pthread
ASTC Encoder
OpenBenchmarking.org Seconds, Fewer Is Better. ASTC Encoder 2.4, Preset: Medium. r4: 7.1472 (SE +/- 0.0290, N = 3); r2b: 7.1887 (SE +/- 0.0906, N = 15). 1. (CXX) g++ options: -O3 -flto -pthread
Timed Mesa Compilation This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Timed Mesa Compilation 21.0, Time To Compile. r1a: 20.38 (SE +/- 0.12, N = 3); r1: 20.95 (SE +/- 0.02, N = 3); r4: 21.31 (SE +/- 0.11, N = 3); r3: 21.37 (SE +/- 0.15, N = 3); r2b: 21.58 (SE +/- 0.04, N = 3).
oneDNN
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU. r1: 3.53026 (SE +/- 0.00193, N = 3, MIN: 3.38); r2b: 3.53121 (SE +/- 0.00854, N = 3, MIN: 3.37); r1a: 3.54367 (SE +/- 0.00732, N = 3, MIN: 3.38); r4: 3.54783 (SE +/- 0.00650, N = 3, MIN: 3.37); r3: 3.56224 (SE +/- 0.01280, N = 3, MIN: 3.39). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU. r1a: 0.395588 (SE +/- 0.001124, N = 3, MIN: 0.36); r1: 0.398282 (SE +/- 0.001135, N = 3, MIN: 0.37); r4: 0.402919 (SE +/- 0.002415, N = 14, MIN: 0.36); r2b: 0.403409 (SE +/- 0.004259, N = 4, MIN: 0.36); r3: 0.406877 (SE +/- 0.003204, N = 10, MIN: 0.37). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-HEVC 1.5.0, Tuning: 1 - Input: Bosphorus 1080p. r1a: 37.34 (SE +/- 0.24, N = 3); r1: 36.91 (SE +/- 0.29, N = 3); r3: 28.22 (SE +/- 0.14, N = 3); r4: 28.01 (SE +/- 0.31, N = 3); r2b: 27.80 (SE +/- 0.09, N = 3). 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
AOM AV1
OpenBenchmarking.org Frames Per Second, More Is Better. AOM AV1 3.0, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p. r1a: 103.92 (SE +/- 1.01, N = 15); r4: 36.35 (SE +/- 0.27, N = 3); r2b: 36.20 (SE +/- 0.19, N = 3); r3: 36.06 (SE +/- 0.26, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 160 - Buffer Length: 256 - Filter Length: 57. r1a: 3162066667 (SE +/- 2062630.47, N = 3); r1: 3144800000 (SE +/- 17047384.94, N = 3); r3: 3143300000 (SE +/- 14901789.60, N = 3); r4: 3140266667 (SE +/- 16411005.79, N = 3); r2b: 3131866667 (SE +/- 14685858.66, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 128 - Buffer Length: 256 - Filter Length: 57. r1: 3415933333 (SE +/- 8088331.79, N = 3); r3: 3411000000 (SE +/- 6896617.53, N = 3); r2b: 3400066667 (SE +/- 14312737.14, N = 3); r4: 3398800000 (SE +/- 16537936.19, N = 3); r1a: 3352733333 (SE +/- 38975091.76, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 64 - Buffer Length: 256 - Filter Length: 57. r1: 3267133333 (SE +/- 5206513.02, N = 3); r1a: 3263700000 (SE +/- 2150193.79, N = 3); r4: 3245666667 (SE +/- 12876378.03, N = 3); r3: 3232700000 (SE +/- 14893734.70, N = 3); r2b: 3227433333 (SE +/- 17049079.48, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 32 - Buffer Length: 256 - Filter Length: 57. r1a: 1736800000; r1: 1735100000; r3: 1704500000; r2b: 1699333333; r4: 1697500000. SE +/- 2515949.13, N = 3; SE +/- 3951371.07, N = 3; SE +/- 10121648.97, N = 3; SE +/- 6582552.70, N = 3. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57. r1a: 890273333 (SE +/- 669162.00, N = 3); r1: 885320000 (SE +/- 691953.76, N = 3); r3: 865410000 (SE +/- 859903.10, N = 3); r2b: 862890000 (SE +/- 3620722.76, N = 3); r4: 860046667 (SE +/- 10609570.10, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57. r1: 441953333 (SE +/- 422150.58, N = 3); r3: 432170000 (SE +/- 1240739.03, N = 3); r4: 432013333 (SE +/- 2739929.03, N = 3); r2b: 428100000 (SE +/- 2458908.97, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 4 - Buffer Length: 256 - Filter Length: 57. r1: 217643333 (SE +/- 1090112.12, N = 3); r4: 216773333 (SE +/- 1956802.95, N = 3); r3: 215343333 (SE +/- 1663583.82, N = 3); r2b: 213203333 (SE +/- 824809.74, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 2 - Buffer Length: 256 - Filter Length: 57. r3: 111510000 (SE +/- 430348.70, N = 3); r1: 110713333 (SE +/- 729984.78, N = 3); r2b: 110173333 (SE +/- 907677.13, N = 3); r4: 109430000 (SE +/- 132035.35, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better. Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57. r1: 57792000 (SE +/- 173700.89, N = 3); r3: 57197667 (SE +/- 550708.74, N = 3); r2b: 56230333 (SE +/- 613156.95, N = 3); r4: 55251667 (SE +/- 534784.17, N = 3). 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
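Taking the r1 column across the thread counts above gives a quick view of how the library scales on this 80-core / 160-thread system: near-linear up to 32 threads, then flattening, with 128 threads edging out 160. A sketch of that calculation from the published numbers:

```python
# Liquid-DSP r1 results (samples/s) by thread count, from the charts above.
samples = {1: 57_792_000, 2: 110_713_333, 4: 217_643_333, 8: 441_953_333,
           16: 885_320_000, 32: 1_735_100_000, 64: 3_267_133_333,
           128: 3_415_933_333, 160: 3_144_800_000}

base = samples[1]
speedup = {t: s / base for t, s in samples.items()}       # vs. 1 thread
efficiency = {t: sp / t for t, sp in speedup.items()}     # speedup per thread
```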
KTX-Software toktx
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: Zstd Compression 19. r2b: 19.78 (SE +/- 0.22, N = 3); r4: 20.08 (SE +/- 0.20, N = 3).
Basis Universal
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: UASTC Level 3. r2b: 17.16 (SE +/- 0.02, N = 3); r4: 17.19 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU:
  r1:  0.239989 (SE +/- 0.000856, N = 3, MIN: 0.22)
  r1a: 0.240122 (SE +/- 0.000662, N = 3, MIN: 0.23)
  r4:  0.242450 (SE +/- 0.002245, N = 7, MIN: 0.22)
  r2b: 0.243026 (SE +/- 0.003187, N = 3, MIN: 0.22)
  r3:  0.243308 (SE +/- 0.002507, N = 5, MIN: 0.22)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
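The oneDNN results report an average time alongside a MIN across repetitions. benchdnn itself is a C++ harness; the general pattern of repeating an operation and keeping both the mean and the minimum can be sketched as follows (a generic illustration, not benchdnn's actual code):

```python
import time

def time_op(op, repeats=5):
    """Run op() `repeats` times; return (mean_ms, min_ms) wall-clock timings."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        op()
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings), min(timings)

mean_ms, min_ms = time_op(lambda: sum(i * i for i in range(100_000)))
print(f"avg: {mean_ms:.3f} ms, MIN: {min_ms:.3f} ms")
```

The MIN is useful as a "best case under no interference" floor, while the mean reflects typical sustained behavior.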
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p:
  r1a: 393.46 (SE +/- 16.03, N = 12)
  r1:  386.29 (SE +/- 15.40, N = 12)
  r3:  185.53 (SE +/- 1.57, N = 3)
  r4:  184.07 (SE +/- 0.65, N = 3)
  r2b: 182.26 (SE +/- 4.05, N = 12)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: UASTC 3:
  r4:  5.562 (SE +/- 0.008, N = 3)
  r2b: 5.664 (SE +/- 0.053, N = 15)
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU:
  r3:  1.24176 (SE +/- 0.01211, N = 3, MIN: 1.18)
  r4:  1.24222 (SE +/- 0.01282, N = 3, MIN: 1.19)
  r1:  1.24809 (SE +/- 0.00180, N = 3, MIN: 1.2)
  r1a: 1.25267 (SE +/- 0.01592, N = 15, MIN: 1.19)
  r2b: 1.25313 (SE +/- 0.00964, N = 3, MIN: 1.2)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU:
  r1a: 0.912279 (SE +/- 0.002111, N = 3, MIN: 0.86)
  r1:  0.918568 (SE +/- 0.002101, N = 3, MIN: 0.85)
  r3:  0.936941 (SE +/- 0.007264, N = 3, MIN: 0.85)
  r4:  0.940714 (SE +/- 0.008450, N = 3, MIN: 0.86)
  r2b: 0.943624 (SE +/- 0.011253, N = 3, MIN: 0.86)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org Seconds, Fewer Is Better. Basis Universal 1.13, Settings: UASTC Level 2:
  r2b: 13.98 (SE +/- 0.18, N = 3)
  r4:  14.16 (SE +/- 0.15, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better. Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction:
  r1a: 11.27 (SE +/- 0.03, N = 3)
  r1:  11.36 (SE +/- 0.02, N = 3)
  r2b: 11.56 (SE +/- 0.04, N = 3)
  r3:  14.60 (SE +/- 0.03, N = 3)
  r4:  14.66 (SE +/- 0.02, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
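Xcompact3d uses high-order compact finite-difference schemes that are well beyond the scope of a short example, but the basic idea of finite differencing can be illustrated with one explicit step of the 1D diffusion equation (the grid size, viscosity, and time step below are arbitrary illustrative values, not the benchmark's):

```python
def diffuse_step(u, nu, dx, dt):
    """One explicit finite-difference step of du/dt = nu * d2u/dx2, fixed ends."""
    prev = u[:]
    for i in range(1, len(u) - 1):
        u[i] = prev[i] + nu * dt / dx**2 * (prev[i + 1] - 2 * prev[i] + prev[i - 1])
    return u

u = [0.0] * 21
u[10] = 1.0  # initial spike in the middle of the domain
for _ in range(50):
    u = diffuse_step(u, nu=0.1, dx=1.0, dt=1.0)  # nu*dt/dx^2 = 0.1 (stable)
print(f"peak after diffusion: {max(u):.4f}")
```

Each interior point is updated from its two neighbors; real solvers like Xcompact3d apply far higher-order stencils across MPI-distributed 3D grids, which is why the cells-per-direction setting dominates runtime.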
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: UASTC 3 + Zstd Compression 19:
  r2b: 10.01 (SE +/- 0.06, N = 3)
  r4:  10.03 (SE +/- 0.11, N = 5)
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU:
  r2b: 0.210324 (SE +/- 0.004449, N = 15, MIN: 0.18)
  r1a: 0.210728 (SE +/- 0.001109, N = 3, MIN: 0.2)
  r1:  0.210919 (SE +/- 0.002205, N = 15, MIN: 0.19)
  r4:  0.217941 (SE +/- 0.004970, N = 15, MIN: 0.19)
  r3:  0.218349 (SE +/- 0.003384, N = 15, MIN: 0.19)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU:
  r1:  0.593042 (SE +/- 0.001703, N = 3, MIN: 0.56)
  r1a: 0.595661 (SE +/- 0.000780, N = 3, MIN: 0.56)
  r4:  0.602038 (SE +/- 0.003648, N = 3, MIN: 0.56)
  r2b: 0.602122 (SE +/- 0.004180, N = 3, MIN: 0.56)
  r3:  0.602314 (SE +/- 0.004400, N = 3, MIN: 0.56)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org Seconds, Fewer Is Better. Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction:
  r1a: 2.73859096 (SE +/- 0.01532048, N = 3)
  r1:  2.74370996 (SE +/- 0.00774937, N = 3)
  r2b: 3.02281992 (SE +/- 0.02799890, N = 3)
  r3:  3.56592774 (SE +/- 0.03072276, N = 15)
  r4:  3.57278153 (SE +/- 0.02850005, N = 15)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.org Seconds, Fewer Is Better. KTX-Software toktx 4.0, Settings: Zstd Compression 9:
  r2b: 3.470 (SE +/- 0.003, N = 3)
  r4:  3.697 (SE +/- 0.064, N = 15)
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU:
  r1:  3.57247 (SE +/- 0.00924, N = 3, MIN: 3.53)
  r1a: 3.57662 (SE +/- 0.00795, N = 3, MIN: 3.5)
  r3:  3.64033 (SE +/- 0.05675, N = 14, MIN: 3.47)
  r2b: 3.64232 (SE +/- 0.05421, N = 14, MIN: 3.51)
  r4:  3.64319 (SE +/- 0.05617, N = 14, MIN: 3.5)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU:
  r1a: 0.863214 (SE +/- 0.002055, N = 3, MIN: 0.84)
  r1:  0.864164 (SE +/- 0.002419, N = 3, MIN: 0.84)
  r2b: 0.874080 (SE +/- 0.008361, N = 14, MIN: 0.83)
  r3:  0.874968 (SE +/- 0.007890, N = 14, MIN: 0.84)
  r4:  0.876227 (SE +/- 0.007461, N = 14, MIN: 0.84)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Google Draco Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better. Google Draco 1.4.1, Model: Church Facade:
  r2b: 7001 (SE +/- 20.01, N = 3)
  r4:  7082 (SE +/- 3.33, N = 3)
1. (CXX) g++ options: -O3
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU:
  r1a: 1.79881 (SE +/- 0.00121, N = 3, MIN: 1.69)
  r1:  1.80046 (SE +/- 0.00580, N = 3, MIN: 1.68)
  r2b: 1.81774 (SE +/- 0.01382, N = 3, MIN: 1.69)
  r4:  1.81913 (SE +/- 0.00968, N = 3, MIN: 1.68)
  r3:  1.84339 (SE +/- 0.02043, N = 3, MIN: 1.67)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. Google Draco 1.4.1, Model: Lion:
  r2b: 6126 (SE +/- 25.21, N = 3)
  r4:  6170 (SE +/- 21.15, N = 3)
1. (CXX) g++ options: -O3
OpenBenchmarking.org ms, Fewer Is Better. toyBrot Fractal Generator 2020-11-18, Implementation: C++ Threads:
  r1a: 6980 (SE +/- 29.96, N = 3)
  r1:  7018 (SE +/- 49.12, N = 3)
  r4:  7141 (SE +/- 76.94, N = 4)
  r2b: 7149 (SE +/- 89.67, N = 3)
  r3:  7203 (SE +/- 98.76, N = 3)
1. (CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p:
  r1a: 329.53 (SE +/- 1.10, N = 3)
  r1:  327.87 (SE +/- 1.20, N = 3)
  r3:  164.51 (SE +/- 1.63, N = 3)
  r2b: 164.32 (SE +/- 1.13, N = 3)
  r4:  162.21 (SE +/- 1.59, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p:
  r1a: 408.24 (SE +/- 0.66, N = 3)
  r1:  401.29 (SE +/- 1.44, N = 3)
  r2b: 182.17 (SE +/- 0.90, N = 3)
  r3:  181.52 (SE +/- 2.25, N = 3)
  r4:  179.13 (SE +/- 0.47, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU:
  r1:  1.10991 (SE +/- 0.00274, N = 3, MIN: 1.02)
  r4:  1.11811 (SE +/- 0.01182, N = 3, MIN: 1.02)
  r2b: 1.11874 (SE +/- 0.00330, N = 3, MIN: 1.02)
  r1a: 1.12224 (SE +/- 0.00124, N = 3, MIN: 1.02)
  r3:  1.14578 (SE +/- 0.00975, N = 3, MIN: 1.04)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU:
  r2b: 0.869978 (SE +/- 0.004902, N = 3, MIN: 0.82)
  r4:  0.875421 (SE +/- 0.005244, N = 3, MIN: 0.82)
  r1:  0.877815 (SE +/- 0.006225, N = 3, MIN: 0.82)
  r1a: 0.879137 (SE +/- 0.003986, N = 3, MIN: 0.83)
  r3:  0.901823 (SE +/- 0.006631, N = 3, MIN: 0.84)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better. oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU:
  r1:  2.07944 (SE +/- 0.00138, N = 3, MIN: 2.03)
  r1a: 2.08532 (SE +/- 0.00168, N = 3, MIN: 2.03)
  r4:  2.10837 (SE +/- 0.01801, N = 3, MIN: 2.03)
  r3:  2.10841 (SE +/- 0.01943, N = 3, MIN: 2.03)
  r2b: 2.11712 (SE +/- 0.01980, N = 3, MIN: 2.03)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p:
  r1:  499.23 (SE +/- 3.80, N = 3)
  r1a: 493.51 (SE +/- 4.78, N = 3)
  r2b: 234.51 (SE +/- 2.64, N = 4)
  r3:  234.39 (SE +/- 1.80, N = 10)
  r4:  233.96 (SE +/- 1.14, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better. SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p:
  r1:  290.67 (SE +/- 1.68, N = 3)
  r1a: 288.99 (SE +/- 1.37, N = 3)
  r2b: 158.16 (SE +/- 1.76, N = 5)
  r3:  157.83 (SE +/- 1.64, N = 3)
  r4:  156.26 (SE +/- 1.22, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
r1 Testing initiated at 28 April 2021 08:40 by user root.
r1a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 29 April 2021 06:04 by user root.
r2 Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 29 April 2021 16:12 by user root.
r2a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 29 April 2021 16:16 by user root.
r2b Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 29 April 2021 18:24 by user root.
r3 Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 April 2021 08:26 by user root.
r4 Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 April 2021 21:13 by user root.
r5 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96, Graphics: ASPEED, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04, Kernel: 5.11.0-051100-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1024x768
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
Python Notes: Python 2.7.18 + Python 3.8.5
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 1 May 2021 07:03 by user root.