Intel Core i9-10885H testing with an HP 8736 (S91 Ver. 01.02.01 BIOS) and NVIDIA Quadro RTX 5000 with Max-Q Design 16GB on Ubuntu 20.04 via the Phoronix Test Suite.
Blender
Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org - Blender 2.90 - Blend File: Barbershop - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
  r1: 1192.96   SE +/- 0.44, N = 3
  r2: 1190.05   SE +/- 0.85, N = 3
  r3: 1192.80   SE +/- 2.01, N = 3
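Each result in this report is a mean with a standard error over N runs. Assuming the conventional formula SE = s / sqrt(N) (sample standard deviation over the square root of the run count), a line like r1's can be reproduced from raw run times; the sample times below are hypothetical, since the report only publishes the aggregates:

```python
import math
import statistics

# Hypothetical render times (seconds) from three benchmark runs;
# the real per-run samples are not included in the report.
samples = [1192.2, 1193.5, 1193.18]

n = len(samples)
mean = statistics.mean(samples)
# Standard error of the mean: sample standard deviation / sqrt(N)
se = statistics.stdev(samples) / math.sqrt(n)

print(f"{mean:.2f}   SE +/- {se:.2f}, N = {n}")  # prints: 1192.96   SE +/- 0.39, N = 3
```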
Basis Universal
Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better)
  r1: 840.35   SE +/- 0.74, N = 3
  r2: 840.32   SE +/- 0.35, N = 3
  r3: 841.23   SE +/- 0.62, N = 3
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Blender
Blender 2.90 - Blend File: Barbershop - Compute: CUDA (Seconds, Fewer Is Better)
  r1: 734.81   SE +/- 0.24, N = 3
  r2: 731.67   SE +/- 0.26, N = 3
  r3: 733.02   SE +/- 0.41, N = 3

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CUDA (Seconds, Fewer Is Better)
  r1: 608.80   SE +/- 0.04, N = 3
  r2: 609.56   SE +/- 0.02, N = 3
  r3: 608.62   SE +/- 0.06, N = 3
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
  r1: 62.57   SE +/- 0.15, N = 10   MIN: 60.82 / MAX: 96.05
  r2: 63.18   SE +/- 0.18, N = 11   MIN: 61.02 / MAX: 104.39
  r3: 63.56   SE +/- 0.22, N = 10   MIN: 60.92 / MAX: 102.85

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  r1: 10.65   SE +/- 0.01, N = 10   MIN: 10.33 / MAX: 34.53
  r2: 10.68   SE +/- 0.01, N = 11   MIN: 10.35 / MAX: 33.35
  r3: 10.66   SE +/- 0.01, N = 10   MIN: 10.33 / MAX: 32.25

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  r1: 5.239   SE +/- 0.210, N = 10   MIN: 3.19 / MAX: 26.27
  r2: 5.291   SE +/- 0.185, N = 11   MIN: 3.3 / MAX: 27.38
  r3: 5.285   SE +/- 0.209, N = 10   MIN: 3.27 / MAX: 26.82

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
  r1: 58.16   SE +/- 0.40, N = 10   MIN: 36.86 / MAX: 81.73
  r2: 58.53   SE +/- 0.35, N = 11   MIN: 37.33 / MAX: 83.74
  r3: 58.79   SE +/- 0.40, N = 10   MIN: 36.87 / MAX: 85.77

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  r1: 8.899   SE +/- 0.373, N = 10   MIN: 4.96 / MAX: 31.21
  r2: 8.982   SE +/- 0.316, N = 11   MIN: 5.05 / MAX: 31.35
  r3: 8.944   SE +/- 0.373, N = 10   MIN: 5.01 / MAX: 31.89

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl (common to all Mobile Neural Network results above)
DDraceNetwork
This is a test of DDraceNetwork, an open-source cooperative platformer. OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.
DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: RaiNyMore2 (Frames Per Second, More Is Better)
  r1: 170.36   SE +/- 9.09, N = 15   MIN: 2.43 / MAX: 499.5
  r2: 169.30   SE +/- 9.59, N = 15   MIN: 2.38 / MAX: 499.5
  r3: 151.49   SE +/- 11.09, N = 15   MIN: 2.37 / MAX: 499.75
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
ASTC Encoder
ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
  r1: 447.99   SE +/- 0.52, N = 3
  r2: 449.37   SE +/- 0.81, N = 3
  r3: 449.90   SE +/- 0.54, N = 3
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
LeelaChessZero
LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero 0.26 - Backend: OpenCL (Nodes Per Second, More Is Better)
  r1: 13277   SE +/- 160.45, N = 3
  r2: 13173   SE +/- 176.76, N = 3
  r3: 13416   SE +/- 44.68, N = 3
1. (CXX) g++ options: -flto -pthread
BRL-CAD
BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.30.8 (VGR Performance Metric, More Is Better)
  r1: 63909
  r2: 63822
  r3: 64033
1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm
VkFFT
VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.
VkFFT 1.1.1 (Benchmark Score, More Is Better)
  r1: 25820   SE +/- 62.93, N = 3
  r2: 25647   SE +/- 58.68, N = 3
  r3: 25683   SE +/- 108.37, N = 3
1. (CXX) g++ options: -O3 -pthread
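VkFFT's score aggregates the throughput of discrete Fourier transforms over many sizes on the GPU. The transform being timed is the standard DFT; a minimal radix-2 Cooley-Tukey sketch in pure Python illustrates what is computed (illustrative only — VkFFT is a C++/Vulkan library and this is not its code):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Twiddle factors combine the two half-size transforms.
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# An 8-point unit impulse transforms to a flat spectrum of ones.
spectrum = fft([1, 0, 0, 0, 0, 0, 0, 0])
print([round(abs(v), 6) for v in spectrum])  # → [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```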
DDraceNetwork
DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: RaiNyMore2 (Frames Per Second, More Is Better)
  r1: 158.21   MIN: 7.02 / MAX: 449.03
  r2: 100.58   MIN: 6.72 / MAX: 493.34
  r3: 130.66   MIN: 6.67 / MAX: 498.75
  (only two standard errors were reported for this test: SE +/- 13.14, N = 12 and SE +/- 9.86, N = 15)
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
GROMACS
This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better)
  r1: 0.617   SE +/- 0.003, N = 3
  r2: 0.610   SE +/- 0.004, N = 3
  r3: 0.614   SE +/- 0.002, N = 3
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
Stockfish
This is a test of Stockfish, an advanced open-source C++ chess engine that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
Stockfish 12 - Total Time (Nodes Per Second, More Is Better)
  r1: 9703133   SE +/- 85083.98, N = 8
  r2: 9839292   SE +/- 85742.14, N = 3
  r3: 9629353   SE +/- 67987.28, N = 12
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
Unigine Heaven
This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.
Unigine Heaven 4.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, More Is Better)
  r1: 139.13   SE +/- 0.71, N = 3
  r2: 139.91   SE +/- 0.96, N = 3
  r3: 139.18   SE +/- 0.56, N = 3
Blender
Blender 2.90 - Blend File: Classroom - Compute: CUDA (Seconds, Fewer Is Better)
  r1: 250.78   SE +/- 0.03, N = 3
  r2: 251.90   SE +/- 0.04, N = 3
  r3: 251.80   SE +/- 0.05, N = 3
dav1d
Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  r1: 86.08   SE +/- 0.99, N = 4   MIN: 54.34 / MAX: 256.39
  r2: 85.83   SE +/- 1.05, N = 4   MIN: 54.27 / MAX: 257.58
  r3: 85.95   SE +/- 1.03, N = 4   MIN: 54.21 / MAX: 255.72
1. (CC) gcc options: -pthread
Build2
This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.
Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
  r1: 210.05   SE +/- 0.40, N = 3
  r2: 210.71   SE +/- 0.49, N = 3
  r3: 210.95   SE +/- 0.85, N = 3
Blender
Blender 2.90 - Blend File: Pabellon Barcelona - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
  r1: 196.21   SE +/- 0.02, N = 3
  r2: 196.28   SE +/- 0.03, N = 3
  r3: 196.41   SE +/- 0.08, N = 3
High Performance Conjugate Gradient
HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.
High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  r1: 3.96177   SE +/- 0.00082, N = 3
  r2: 3.96068   SE +/- 0.00692, N = 3
  r3: 3.95457   SE +/- 0.01196, N = 3
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
Unigine Superposition
This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.
Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Ultra - Renderer: OpenGL (Frames Per Second, More Is Better)
  r1: 25.1   SE +/- 0.06, N = 3   MAX: 29.3
  r2: 25.4   SE +/- 0.03, N = 3   MAX: 29.4
  r3: 25.3   SE +/- 0.03, N = 3   MAX: 29.7

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: High - Renderer: OpenGL (Frames Per Second, More Is Better)
  r1: 65.9   SE +/- 0.19, N = 3   MAX: 81.6
  r2: 66.5   SE +/- 0.12, N = 3   MAX: 80.8
  r3: 66.2   SE +/- 0.09, N = 3   MAX: 80.3

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Medium - Renderer: OpenGL (Frames Per Second, More Is Better)
  r1: 90.4   SE +/- 0.15, N = 3   MAX: 114.5
  r2: 90.6   SE +/- 0.15, N = 3   MAX: 114.4
  r3: 90.5   SE +/- 0.15, N = 3   MAX: 113

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Low - Renderer: OpenGL (Frames Per Second, More Is Better)
  r1: 177.7   SE +/- 0.23, N = 3   MAX: 260.1
  r2: 178.1   SE +/- 0.71, N = 3   MAX: 259.4
  r3: 177.4   SE +/- 0.52, N = 3   MAX: 263.9
GraphicsMagick
This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
  r1: 207   SE +/- 1.72, N = 8
  r2: 207   SE +/- 1.60, N = 10
  r3: 207   SE +/- 1.72, N = 8
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
CLOMP
CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
  r1: 3.7   SE +/- 0.03, N = 3
  r2: 2.5   SE +/- 0.03, N = 15
  r3: 3.6   SE +/- 0.03, N = 15
1. (CC) gcc options: -fopenmp -O3 -lm
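CLOMP's "Static OMP Speedup" follows the conventional definition of parallel speedup: serial loop time divided by the time under OpenMP's static schedule across all cores. With hypothetical timings (the report only publishes the final ratio), the metric reduces to:

```python
# Hypothetical timings (seconds); assuming the conventional definition
# speedup = serial_time / parallel_time.
serial_time = 4.44
parallel_time = 1.20  # OpenMP static schedule across all available cores

speedup = serial_time / parallel_time
print(f"Static OMP Speedup: {speedup:.1f}")  # → Static OMP Speedup: 3.7
```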
Blender
Blender 2.90 - Blend File: Fishy Cat - Compute: CUDA (Seconds, Fewer Is Better)
  r1: 168.87   SE +/- 0.10, N = 3
  r2: 167.96   SE +/- 0.11, N = 3
  r3: 168.08   SE +/- 0.05, N = 3
LuxCoreRender OpenCL
LuxCoreRender is an open-source, physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the alternative luxcorerender test profile is for CPU execution with a different set of tests. Learn more via the OpenBenchmarking.org test page.
LuxCoreRender OpenCL 2.3 - Scene: LuxCore Benchmark (M samples/sec, More Is Better)
  r1: 2.26   SE +/- 0.04, N = 12   MIN: 0.14 / MAX: 2.63
  r2: 2.31   SE +/- 0.01, N = 3   MIN: 0.27 / MAX: 2.63
  r3: 2.29   SE +/- 0.01, N = 3   MIN: 0.27 / MAX: 2.64
LuxCoreRender OpenCL
LuxCoreRender OpenCL 2.3 - Scene: Food (M samples/sec, More Is Better)
  r1: 1.27   SE +/- 0.04, N = 12   MIN: 0.13 / MAX: 1.57
  r2: 1.32   SE +/- 0.01, N = 3   MIN: 0.29 / MAX: 1.57
  r3: 1.30   SE +/- 0.02, N = 3   MIN: 0.26 / MAX: 1.57
OpenVINO
This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms, Fewer Is Better)
  r1: 5069.44   SE +/- 15.43, N = 3
  r2: 5079.89   SE +/- 9.68, N = 9
  r3: 5073.09   SE +/- 14.45, N = 5

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, More Is Better)
  r1: 0.79   SE +/- 0.01, N = 3
  r2: 0.79   SE +/- 0.01, N = 9
  r3: 0.79   SE +/- 0.01, N = 5

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread (common to both OpenVINO results above)
dav1d
dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, More Is Better)
  r1: 489.84   SE +/- 5.73, N = 14   MIN: 317.1 / MAX: 898.12
  r2: 486.46   SE +/- 3.02, N = 14   MIN: 316.37 / MAX: 900.57
  r3: 487.57   SE +/- 3.24, N = 13   MIN: 316.7 / MAX: 911.47
1. (CC) gcc options: -pthread
LuxCoreRender OpenCL
LuxCoreRender OpenCL 2.3 - Scene: DLSC (M samples/sec, More Is Better)
  r1: 2.70   SE +/- 0.06, N = 12   MIN: 0.69 / MAX: 2.81
  r2: 2.77   SE +/- 0.00, N = 3   MIN: 2.57 / MAX: 2.84
  r3: 2.76   SE +/- 0.00, N = 3   MIN: 2.56 / MAX: 2.84
Blender
Blender 2.90 - Blend File: Classroom - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
  r1: 116.76   SE +/- 0.13, N = 3
  r2: 116.15   SE +/- 0.23, N = 3
  r3: 116.26   SE +/- 0.13, N = 3
Basis Universal
Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
  r1: 110.84   SE +/- 0.55, N = 3
  r2: 110.93   SE +/- 0.55, N = 3
  r3: 111.04   SE +/- 0.53, N = 3
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
RealSR-NCNN
RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, Fewer Is Better)
  r1: 99.81   SE +/- 0.31, N = 3
  r2: 100.62   SE +/- 0.48, N = 3
  r3: 100.75   SE +/- 0.35, N = 3
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The reported result is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 7140.50   SE +/- 2.95, N = 3   MIN: 7021.68
  r2: 7159.42   SE +/- 4.70, N = 3   MIN: 7041.4
  r3: 7151.58   SE +/- 6.73, N = 3   MIN: 7027.2

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 7155.41   SE +/- 12.55, N = 3   MIN: 7025.22
  r2: 7159.48   SE +/- 1.75, N = 3   MIN: 7040.61
  r3: 7169.03   SE +/- 6.55, N = 3   MIN: 7046.49

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1: 7144.23   SE +/- 3.89, N = 3   MIN: 7028.46
  r2: 7154.66   SE +/- 0.92, N = 3   MIN: 7035.88
  r3: 7147.09   SE +/- 2.23, N = 3   MIN: 7033.98

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread (common to all oneDNN results above)
Blender
Blender 2.90 - Blend File: BMW27 - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
  r1: 41.47   SE +/- 3.33, N = 15
  r2: 38.07   SE +/- 0.02, N = 3
  r3: 38.07   SE +/- 0.05, N = 3

Blender 2.90 - Blend File: BMW27 - Compute: CUDA (Seconds, Fewer Is Better)
  r1: 91.00   SE +/- 0.14, N = 3
  r2: 90.82   SE +/- 0.16, N = 3
  r3: 90.93   SE +/- 0.10, N = 3
GEGL
GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.
GEGL - Operation: Cartoon (Seconds, Fewer Is Better)
  r1: 86.79   SE +/- 0.12, N = 3
  r2: 87.32   SE +/- 0.19, N = 3
  r3: 86.99   SE +/- 0.09, N = 3
OpenVINO
OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  r1: 1.17   SE +/- 0.00, N = 3
  r2: 1.19   SE +/- 0.00, N = 4
  r3: 1.19   SE +/- 0.00, N = 6

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  r1: 3442.78   SE +/- 33.67, N = 3
  r2: 3403.45   SE +/- 38.35, N = 4
  r3: 3405.92   SE +/- 34.05, N = 6

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread (common to both OpenVINO results above)
oneDNN
oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 3795.02   SE +/- 2.45, N = 3   MIN: 3682.24
  r2: 3797.05   SE +/- 2.65, N = 3   MIN: 3673.18
  r3: 3797.72   SE +/- 3.77, N = 3   MIN: 3684.19

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 3795.81   SE +/- 6.76, N = 3   MIN: 3687.23
  r2: 3800.41   SE +/- 4.34, N = 3   MIN: 3681.23
  r3: 3798.12   SE +/- 3.22, N = 3   MIN: 3685.27

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1: 3797.32   SE +/- 1.61, N = 3   MIN: 3686.53
  r2: 3799.45   SE +/- 1.20, N = 3   MIN: 3692.97
  r3: 3792.87   SE +/- 1.33, N = 3   MIN: 3672.83

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread (common to all oneDNN results above)
NCNN
NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  r1: 19.16   SE +/- 0.06, N = 3   MIN: 18.07 / MAX: 22.36
  r2: 18.91   SE +/- 0.24, N = 3   MIN: 13.5 / MAX: 30.63
  r3: 19.38   SE +/- 0.10, N = 3   MIN: 14.45 / MAX: 42.2

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  r1: 27.64   SE +/- 0.14, N = 3   MIN: 27 / MAX: 40.14
  r2: 27.51   SE +/- 0.03, N = 3   MIN: 26.93 / MAX: 43.6
  r3: 27.63   SE +/- 0.05, N = 3   MIN: 27.02 / MAX: 46.56

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  r1: 35.95   SE +/- 0.48, N = 3   MIN: 34.4 / MAX: 55.63
  r2: 35.59   SE +/- 0.05, N = 3   MIN: 34.42 / MAX: 51.24
  r3: 35.66   SE +/- 0.02, N = 3   MIN: 34.45 / MAX: 49.15

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  r1: 37.81   SE +/- 0.51, N = 3   MIN: 34.04 / MAX: 52.8
  r2: 37.30   SE +/- 0.03, N = 3   MIN: 33.91 / MAX: 56.28
  r3: 37.22   SE +/- 0.05, N = 3   MIN: 33.9 / MAX: 52.84

NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  r1: 15.50   SE +/- 0.08, N = 3   MIN: 14.41 / MAX: 55.15
  r2: 15.46   SE +/- 0.04, N = 3   MIN: 14.35 / MAX: 27.24
  r3: 15.49   SE +/- 0.03, N = 3   MIN: 14.41 / MAX: 24.83

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  r1: 18.62   SE +/- 0.02, N = 3   MIN: 17.08 / MAX: 32.57
  r2: 18.71   SE +/- 0.04, N = 3   MIN: 17.06 / MAX: 33.58
  r3: 18.66   SE +/- 0.02, N = 3   MIN: 17.05 / MAX: 30.94

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  r1: 72.09   SE +/- 0.20, N = 3   MIN: 70.5 / MAX: 88.28
  r2: 71.91   SE +/- 0.03, N = 3   MIN: 70.43 / MAX: 92.47
  r3: 71.86   SE +/- 0.12, N = 3   MIN: 70.48 / MAX: 88

NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  r1: 19.98   SE +/- 0.01, N = 3   MIN: 18.95 / MAX: 23.24
  r2: 20.01   SE +/- 0.08, N = 3   MIN: 18.96 / MAX: 24.67
  r3: 20.21   SE +/- 0.06, N = 3   MIN: 19.11 / MAX: 32.7

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  r1: 2.54   SE +/- 0.00, N = 3   MIN: 2.35 / MAX: 2.74
  r2: 2.60   SE +/- 0.05, N = 3   MIN: 2.45 / MAX: 10.37
  r3: 2.57   SE +/- 0.02, N = 3   MIN: 2.45 / MAX: 2.83

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  r1: 10.00   SE +/- 0.05, N = 3   MIN: 9.46 / MAX: 24.32
  r2: 9.05   SE +/- 0.96, N = 3   MIN: 6.99 / MAX: 21.76
  r3: 9.06   SE +/- 0.96, N = 3   MIN: 7.04 / MAX: 12.38

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  r1: 6.67   SE +/- 0.02, N = 3   MIN: 5.99 / MAX: 21.18
  r2: 5.96   SE +/- 0.75, N = 3   MIN: 4.32 / MAX: 14.32
  r3: 5.96   SE +/- 0.74, N = 3   MIN: 4.33 / MAX: 28.21

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  r1: 7.93   SE +/- 0.03, N = 3   MIN: 7.52 / MAX: 16.61
  r2: 6.95   SE +/- 0.94, N = 3   MIN: 5.01 / MAX: 9.68
  r3: 7.03   SE +/- 0.95, N = 3   MIN: 5.04 / MAX: 20.64

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  r1: 5.74   SE +/- 0.65, N = 3   MIN: 4.3 / MAX: 7.75
  r2: 5.81   SE +/- 0.65, N = 3   MIN: 4.43 / MAX: 17.76
  r3: 5.81   SE +/- 0.62, N = 3   MIN: 4.48 / MAX: 10.59

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  r1: 7.31   SE +/- 0.67, N = 3   MIN: 5.51 / MAX: 16.43
  r2: 7.22   SE +/- 0.73, N = 3   MIN: 5.54 / MAX: 12.03
  r3: 7.23   SE +/- 0.73, N = 3   MIN: 5.55 / MAX: 12.3

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  r1: 26.62   SE +/- 0.17, N = 3   MIN: 25.69 / MAX: 38.05
  r2: 26.63   SE +/- 0.01, N = 3   MIN: 25.7 / MAX: 41.21
  r3: 26.53   SE +/- 0.02, N = 3   MIN: 25.78 / MAX: 41.25

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread (common to all NCNN CPU results above)
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: regnety_400m r1 r2 r3 5 10 15 20 25 SE +/- 0.09, N = 3 SE +/- 1.83, N = 3 SE +/- 1.77, N = 3 19.16 17.15 17.60 MIN: 17.94 / MAX: 21.24 MIN: 13.3 / MAX: 38.12 MIN: 13.79 / MAX: 32.97 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20201218 Target: Vulkan GPU - Model: squeezenet_ssd r1 r2 r3 6 12 18 24 30 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 SE +/- 0.02, N = 3 27.58 27.52 27.55 MIN: 26.94 / MAX: 43.23 MIN: 26.95 / MAX: 42.6 MIN: 26.92 / MAX: 41.99 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20201218, Target: Vulkan GPU - Model: yolov4-tiny (ms; fewer is better): r1 = 35.52 (SE +/- 0.03, min 34.38 / max 51.44), r2 = 35.51 (SE +/- 0.01, min 33.05 / max 50.05), r3 = 35.53 (SE +/- 0.05, min 32.99 / max 52.01); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms; fewer is better): r1 = 37.25 (SE +/- 0.03, min 34.07 / max 48.19), r2 = 37.34 (SE +/- 0.06, min 33.97 / max 56.32), r3 = 37.26 (SE +/- 0.01, min 33.79 / max 52.48); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: alexnet (ms; fewer is better): r1 = 15.44 (SE +/- 0.03, min 14.41 / max 26.42), r2 = 15.53 (SE +/- 0.04, min 14.41 / max 25.62), r3 = 15.50 (SE +/- 0.05, min 14.41 / max 26.23); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: resnet18 (ms; fewer is better): r1 = 18.62 (SE +/- 0.00, min 17.13 / max 20.97), r2 = 18.33 (SE +/- 0.34, min 14.43 / max 32.39), r3 = 18.38 (SE +/- 0.27, min 14.4 / max 32.57); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: vgg16 (ms; fewer is better): r1 = 71.96 (SE +/- 0.05, min 70.52 / max 88.3), r2 = 71.82 (SE +/- 0.04, min 70.37 / max 86.67), r3 = 71.86 (SE +/- 0.02, min 70.4 / max 88.5); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: googlenet (ms; fewer is better): r1 = 20.05 (SE +/- 0.06, min 18.94 / max 32.96), r2 = 18.20 (SE +/- 1.77, min 14.26 / max 31.74), r3 = 18.26 (SE +/- 1.84, min 14.28 / max 36.09); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms; fewer is better): r1 = 2.55 (SE +/- 0.02, min 2.43 / max 2.76), r2 = 2.29 (SE +/- 0.26, min 1.68 / max 8.91), r3 = 2.29 (SE +/- 0.25, min 1.69 / max 12.73); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: efficientnet-b0 (ms; fewer is better): r1 = 10.01 (SE +/- 0.10, min 9.44 / max 29.57), r2 = 9.02 (SE +/- 0.95, min 7 / max 19.29), r3 = 8.99 (SE +/- 0.94, min 6.99 / max 13.79); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: mnasnet (ms; fewer is better): r1 = 6.63 (SE +/- 0.00, min 6.21 / max 8.85), r2 = 5.86 (SE +/- 0.71, min 4.3 / max 15.47), r3 = 5.91 (SE +/- 0.76, min 4.32 / max 7.94); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms; fewer is better): r1 = 7.92 (SE +/- 0.07, min 7.27 / max 20.3), r2 = 6.98 (SE +/- 0.96, min 4.98 / max 27.09), r3 = 7.05 (SE +/- 0.93, min 5.04 / max 20.37); N = 3.
NCNN 20201218, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better): r1 = 5.74 (SE +/- 0.62, min 4.43 / max 9.64), r2 = 5.73 (SE +/- 0.65, min 4.33 / max 10.47), r3 = 5.81 (SE +/- 0.64, min 4.41 / max 25.12); N = 3.
NCNN 20201218, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better): r1 = 7.23 (SE +/- 0.74, min 5.54 / max 9.59), r2 = 7.22 (SE +/- 0.79, min 5.41 / max 20.72), r3 = 7.19 (SE +/- 0.73, min 5.52 / max 9.67); N = 3.
NCNN 20201218, Target: Vulkan GPU - Model: mobilenet (ms; fewer is better): r1 = 26.52 (SE +/- 0.02, min 25.69 / max 43.81), r2 = 26.53 (SE +/- 0.02, min 25.76 / max 43.91), r3 = 26.51 (SE +/- 0.07, min 25.69 / max 45.35); N = 3.
All NCNN results: 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
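Every result in this article reports SE, the standard error of the mean across its N runs. As a quick reference, the figure can be reproduced from the raw run times with nothing but the sample standard deviation; the run times below are hypothetical, not taken from these results:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical three-run timings in ms (N = 3), for illustration only.
runs = [35.52, 35.48, 35.55]
print(f"{statistics.mean(runs):.2f} SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```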
OpenVINO This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms; fewer is better): r1 = 1.21 (SE +/- 0.00, N = 3), r2 = 1.23 (SE +/- 0.00, N = 5), r3 = 1.22 (SE +/- 0.00, N = 4).
OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS; more is better): r1 = 3363.55 (SE +/- 35.01, N = 3), r2 = 3307.53 (SE +/- 33.23, N = 5), r3 = 3347.93 (SE +/- 40.89, N = 4).
OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (ms; fewer is better): r1 = 3202.53 (SE +/- 2.58, N = 3), r2 = 3207.35 (SE +/- 1.22, N = 4), r3 = 3212.10 (SE +/- 2.51, N = 3).
OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (FPS; more is better): r1 = 1.26 (SE +/- 0.01, N = 3), r2 = 1.27 (SE +/- 0.02, N = 4), r3 = 1.27 (SE +/- 0.02, N = 3).
All OpenVINO results: 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
dav1d 0.8.1, Video Input: Summer Nature 4K (FPS; more is better): r1 = 112.75 (SE +/- 1.06, min 99.69 / max 158.99), r2 = 112.03 (SE +/- 1.08, min 99.17 / max 157.08), r3 = 112.65 (SE +/- 1.07, min 99.62 / max 158.58); N = 6. 1. (CC) gcc options: -pthread
OpenVINO 2021.1, Model: Person Detection 0106 FP16 - Device: CPU (ms; fewer is better): r1 = 4961.99 (SE +/- 4.97), r2 = 4978.25 (SE +/- 19.24), r3 = 5006.34 (SE +/- 4.20); N = 3. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenVINO 2021.1, Model: Person Detection 0106 FP16 - Device: CPU (FPS; more is better): r1 = 0.80 (SE +/- 0.00), r2 = 0.80 (SE +/- 0.01), r3 = 0.80 (SE +/- 0.01); N = 3. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s; more is better): r1 = 55.72 (SE +/- 0.59, N = 5), r2 = 56.07 (SE +/- 0.36, N = 3), r3 = 57.01 (SE +/- 0.66, N = 3). 1. (CC) gcc options: -O3
OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (ms; fewer is better): r1 = 3165.24 (SE +/- 4.35), r2 = 3166.57 (SE +/- 3.88), r3 = 3164.51 (SE +/- 7.78); N = 3. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (FPS; more is better): r1 = 1.28 (SE +/- 0.01), r2 = 1.28 (SE +/- 0.01), r3 = 1.28 (SE +/- 0.01); N = 3. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.
Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon (frames per second; more is better): r1 = 9.1343 (SE +/- 0.0822, min 8.81 / max 15.06), r2 = 9.2596 (SE +/- 0.0236, min 8.82 / max 14.99), r3 = 9.1967 (SE +/- 0.1308, min 8.85 / max 15); N = 3.
LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s; more is better): r1 = 57.88 (SE +/- 0.61, N = 5), r2 = 57.36 (SE +/- 0.58, N = 3), r3 = 58.89 (SE +/- 0.48, N = 3). 1. (CC) gcc options: -O3
Node.js V8 Web Tooling Benchmark Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
Node.js V8 Web Tooling Benchmark (runs/s; more is better): r1 = 13.06 (SE +/- 0.14), r2 = 13.17 (SE +/- 0.11), r3 = 13.18 (SE +/- 0.11); N = 3. 1. Node.js v10.19.0
simdjson This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 0.7.1, Throughput Test: Kostya (GB/s; more is better): r1 = 0.76, r2 = 0.75, r3 = 0.75 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -pthread
DDraceNetwork 15.2.3, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: Multeasymap - Total Frame Time (milliseconds; fewer is better): r1 min 2 / avg 2.43 / max 6.55, r2 min 2 / avg 2.46 / max 6.5, r3 min 2 / avg 2.39 / max 7.28. 1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
DDraceNetwork 15.2.3, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: Multeasymap (frames per second; more is better): r1 = 413.88 (SE +/- 0.79, min 119.86 / max 499.75), r2 = 412.43 (SE +/- 2.87, min 103.17 / max 499.75), r3 = 412.38 (SE +/- 4.35, min 127.91 / max 499.75); N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
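The frame-time and FPS charts for the same DDraceNetwork demo are two views of one quantity: average FPS is roughly the reciprocal of the average frame time (only roughly, because the mean of per-frame reciprocals is not the reciprocal of the mean frame time). A minimal sketch of the conversion:

```python
def fps_from_frame_time_ms(avg_frame_time_ms: float) -> float:
    """Convert an average frame time in milliseconds to frames per second."""
    return 1000.0 / avg_frame_time_ms

# A 2.43 ms average frame time corresponds to roughly 412 FPS,
# consistent with the ~413 FPS reported for the same run.
print(round(fps_from_frame_time_ms(2.43), 1))
```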
LevelDB LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.
LevelDB 1.22, Benchmark: Seek Random (microseconds per op; fewer is better): r1 = 12.69 (SE +/- 0.11, N = 15), r2 = 12.63 (SE +/- 0.10, N = 15), r3 = 12.64 (SE +/- 0.11, N = 14). 1. (CXX) g++ options: -O3 -lsnappy -lpthread
Blender 2.90, Blend File: Fishy Cat - Compute: NVIDIA OptiX (seconds; fewer is better): r1 = 60.35 (SE +/- 0.03), r2 = 60.18 (SE +/- 0.04), r3 = 60.25 (SE +/- 0.12); N = 3.
IndigoBench 4.4, Acceleration: CPU - Scene: Supercar (M samples/s; more is better): r1 = 2.147 (SE +/- 0.002), r2 = 2.150 (SE +/- 0.001), r3 = 2.156 (SE +/- 0.002); N = 3.
DDraceNetwork 15.2.3, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap - Total Frame Time (milliseconds; fewer is better): r1 min 2 / avg 2.3 / max 10.06, r2 min 2 / avg 2.32 / max 5.18, r3 min 2 / avg 2.32 / max 8.68. 1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
DDraceNetwork 15.2.3, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap (frames per second; more is better): r1 = 435.20 (SE +/- 0.25, min 99.45 / max 499.75), r2 = 429.37 (SE +/- 2.73, min 112.88 / max 499.75), r3 = 434.24 (SE +/- 2.45, min 115.25 / max 499.75); N = 3. 1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
GraphicsMagick 1.3.33, Operation: Sharpen (iterations per minute; more is better): r1 = 72 (SE +/- 0.33), r2 = 72 (SE +/- 0.58), r3 = 73 (SE +/- 0.67); N = 3.
GraphicsMagick 1.3.33, Operation: Enhanced (iterations per minute; more is better): r1 = 115 (SE +/- 0.67), r2 = 115 (SE +/- 0.67), r3 = 115 (SE +/- 0.67); N = 3.
GraphicsMagick 1.3.33, Operation: Noise-Gaussian (iterations per minute; more is better): r1 = 146 (SE +/- 1.33), r2 = 147 (SE +/- 1.00), r3 = 147 (SE +/- 1.20); N = 3.
GraphicsMagick 1.3.33, Operation: Resizing (iterations per minute; more is better): r1 = 552 (SE +/- 2.73), r2 = 551 (SE +/- 5.00), r3 = 551 (SE +/- 5.36); N = 3.
GraphicsMagick 1.3.33, Operation: HWB Color Space (iterations per minute; more is better): r1 = 775 (SE +/- 5.03), r2 = 774 (SE +/- 5.70), r3 = 776 (SE +/- 4.51); N = 3.
GraphicsMagick 1.3.33, Operation: Rotate (iterations per minute; more is better): r1 = 902 (SE +/- 2.52), r2 = 875 (SE +/- 3.18), r3 = 900 (SE +/- 1.86); N = 3.
All GraphicsMagick results: 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.0, Preset: Thorough (seconds; fewer is better): r1 = 54.29 (SE +/- 0.54), r2 = 54.38 (SE +/- 0.54), r3 = 54.65 (SE +/- 0.42); N = 3. 1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
Basis Universal 1.12, Settings: ETC1S (seconds; fewer is better): r1 = 57.82 (SE +/- 0.38), r2 = 58.06 (SE +/- 0.15), r3 = 58.06 (SE +/- 0.56); N = 3. 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
GEGL GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.
GEGL, Operation: Wavelet Blur (seconds; fewer is better): r1 = 57.99 (SE +/- 0.25), r2 = 57.95 (SE +/- 0.39), r3 = 57.84 (SE +/- 0.25); N = 3.
DeepSpeech Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
DeepSpeech 0.6, Acceleration: CPU (seconds; fewer is better): r1 = 81.30 (SE +/- 0.21), r2 = 81.07 (SE +/- 0.04), r3 = 81.04 (SE +/- 0.04); N = 3.
LuxCoreRender OpenCL LuxCoreRender is an open-source, physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the separate luxcorerender test profile targets CPU execution. Learn more via the OpenBenchmarking.org test page.
LuxCoreRender OpenCL 2.3, Scene: Rainbow Colors and Prism (M samples/sec; more is better): r1 = 5.30 (SE +/- 0.12, N = 12, min 1.66 / max 5.7), r2 = 5.39 (SE +/- 0.02, N = 3, min 4.6 / max 5.67), r3 = 5.41 (SE +/- 0.02, N = 3, min 4.58 / max 5.7).
Basis Universal 1.12, Settings: UASTC Level 2 (seconds; fewer is better): r1 = 55.50 (SE +/- 0.55), r2 = 55.74 (SE +/- 0.41), r3 = 55.77 (SE +/- 0.58); N = 3. 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
ASTC Encoder 2.0, Preset: Medium (seconds; fewer is better): r1 = 7.68 (SE +/- 0.14), r2 = 7.61 (SE +/- 0.11), r3 = 7.58 (SE +/- 0.16); N = 15. 1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
GEGL, Operation: Color Enhance (seconds; fewer is better): r1 = 54.11 (SE +/- 0.22), r2 = 54.31 (SE +/- 0.04), r3 = 54.10 (SE +/- 0.28); N = 3.
simdjson 0.7.1, Throughput Test: LargeRandom (GB/s; more is better): r1 = 0.5, r2 = 0.5, r3 = 0.5 (SE +/- 0.00, N = 3).
simdjson 0.7.1, Throughput Test: PartialTweets (GB/s; more is better): r1 = 0.86 (SE +/- 0.00), r2 = 0.87 (SE +/- 0.01), r3 = 0.86 (SE +/- 0.00); N = 3.
simdjson 0.7.1, Throughput Test: DistinctUserID (GB/s; more is better): r1 = 0.89, r2 = 0.88, r3 = 0.88 (SE +/- 0.00, N = 3).
All simdjson results: 1. (CXX) g++ options: -O3 -pthread
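The GB/s figures are bytes of JSON parsed per second of parser time. As a hedged illustration of how such a throughput number is produced, using Python's stdlib json parser as a stand-in (far slower than simdjson's SIMD-accelerated C++ code, and with a synthetic document rather than the benchmark's sample files):

```python
import json
import time

def parse_throughput_gbps(payload: bytes, iterations: int = 20) -> float:
    """Parse the payload repeatedly and report parser throughput in GB/s."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(payload)
    elapsed = time.perf_counter() - start
    return len(payload) * iterations / elapsed / 1e9

# Synthetic document standing in for the benchmark's sample JSON inputs.
doc = json.dumps([{"id": i, "user": f"u{i}"} for i in range(1000)]).encode()
print(f"{parse_throughput_gbps(doc):.3f} GB/s")
```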
LevelDB 1.22, Benchmark: Random Read (microseconds per op; fewer is better): r1 = 9.620 (SE +/- 0.250, N = 12), r2 = 9.692 (SE +/- 0.206, N = 15), r3 = 9.573 (SE +/- 0.214, N = 15). 1. (CXX) g++ options: -O3 -lsnappy -lpthread
dav1d 0.8.1, Video Input: Summer Nature 1080p (FPS; more is better): r1 = 460.02 (SE +/- 3.60, N = 14, min 375.05 / max 590.01), r2 = 459.61 (SE +/- 3.46, N = 13, min 374.03 / max 582.97), r3 = 459.71 (SE +/- 3.80, N = 13, min 374.63 / max 587.93). 1. (CC) gcc options: -pthread
VkResample VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
VkResample 1.0, Upscale: 2x - Precision: Double (ms; fewer is better): r1 = 256.87 (SE +/- 0.20), r2 = 257.06 (SE +/- 0.11), r3 = 257.62 (SE +/- 0.20); N = 3. 1. (CXX) g++ options: -O3 -pthread
GEGL, Operation: Rotate 90 Degrees (seconds; fewer is better): r1 = 37.70 (SE +/- 0.31), r2 = 37.54 (SE +/- 0.36), r3 = 37.69 (SE +/- 0.43); N = 3.
GEGL, Operation: Antialias (seconds; fewer is better): r1 = 36.56 (SE +/- 0.45), r2 = 36.56 (SE +/- 0.35), r3 = 36.65 (SE +/- 0.38); N = 3.
eSpeak-NG Speech Engine This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (seconds; fewer is better): r1 = 26.47 (SE +/- 0.29), r2 = 27.18 (SE +/- 0.12), r3 = 27.71 (SE +/- 0.04); N = 4. 1. (CC) gcc options: -O2 -std=c99 -lpthread -lm
Cryptsetup, Twofish-XTS 512b Decryption (MiB/s; more is better): r1 = 482.7 (SE +/- 0.10), r2 = 485.7 (SE +/- 1.44), r3 = 483.0 (SE +/- 2.34); N = 3.
Cryptsetup, Serpent-XTS 512b Decryption (MiB/s; more is better): r1 = 871.7 (SE +/- 1.28), r2 = 878.1 (SE +/- 1.17), r3 = 873.5 (SE +/- 4.24); N = 3.
Cryptsetup, Serpent-XTS 512b Encryption (MiB/s; more is better): r1 = 878.0 (SE +/- 0.83), r2 = 882.1 (SE +/- 0.87), r3 = 874.4 (SE +/- 4.25); N = 3.
Cryptsetup, AES-XTS 512b Decryption (MiB/s; more is better): r1 = 3348.3 (SE +/- 1.21), r2 = 3388.5 (SE +/- 10.03), r3 = 3362.9 (SE +/- 13.02); N = 3.
Cryptsetup, AES-XTS 512b Encryption (MiB/s; more is better): r1 = 3346.8 (SE +/- 3.15), r2 = 3381.9 (SE +/- 15.69), r3 = 3336.0 (SE +/- 25.61); N = 3.
Cryptsetup, Twofish-XTS 256b Decryption (MiB/s; more is better): r1 = 482.5 (SE +/- 0.34), r2 = 486.3 (SE +/- 1.43), r3 = 483.0 (SE +/- 2.21); N = 3.
Cryptsetup, Twofish-XTS 256b Encryption (MiB/s; more is better): r1 = 482.0 (SE +/- 0.75), r2 = 487.4 (SE +/- 1.08), r3 = 483.6 (SE +/- 2.51); N = 3.
Cryptsetup, Serpent-XTS 256b Decryption (MiB/s; more is better): r1 = 872.3 (SE +/- 1.62), r2 = 876.6 (SE +/- 1.50), r3 = 870.9 (SE +/- 4.03); N = 3.
Cryptsetup, Serpent-XTS 256b Encryption (MiB/s; more is better): r1 = 874.1 (SE +/- 0.92), r2 = 881.4 (SE +/- 1.25), r3 = 874.1 (SE +/- 2.67); N = 3.
Cryptsetup, AES-XTS 256b Decryption (MiB/s; more is better): r1 = 4002.4 (SE +/- 4.92), r2 = 4055.1 (SE +/- 17.20), r3 = 4026.9 (SE +/- 15.07); N = 3.
Cryptsetup, AES-XTS 256b Encryption (MiB/s; more is better): r1 = 4005.6 (SE +/- 1.66), r2 = 4080.5 (SE +/- 25.91), r3 = 4023.0 (SE +/- 20.10); N = 3.
Cryptsetup, PBKDF2-whirlpool (iterations per second; more is better): r1 = 816282 (SE +/- 4903.32), r2 = 830020 (SE +/- 2314.28), r3 = 810352 (SE +/- 2497.33); N = 3.
Cryptsetup, PBKDF2-sha512 (iterations per second; more is better): r1 = 1919349 (SE +/- 7117.07), r2 = 1943008 (SE +/- 1201.00), r3 = 1886103 (SE +/- 12877.64); N = 3.
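The PBKDF2 numbers above are key-derivation iterations per second; cryptsetup uses this rate to pick an iteration count so that unlocking a volume takes a fixed wall-clock time. A rough stdlib sketch of the same measurement, with hashlib standing in for cryptsetup's crypto backend (so absolute numbers will differ, and whirlpool is omitted since hashlib does not guarantee it):

```python
import hashlib
import time

def pbkdf2_iters_per_second(hash_name: str, iterations: int = 100_000) -> float:
    """Time one PBKDF2 derivation and report iterations per second."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac(hash_name, b"passphrase", b"salt", iterations)
    return iterations / (time.perf_counter() - start)

print(f"PBKDF2-sha512: {pbkdf2_iters_per_second('sha512'):,.0f} iterations/second")
```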
LevelDB 1.22, Benchmark: Sequential Fill (microseconds per op; fewer is better): r1 = 47.24 (SE +/- 0.54, N = 4), r2 = 47.29 (SE +/- 0.58, N = 4), r3 = 47.42 (SE +/- 0.48, N = 5).
LevelDB 1.22, Benchmark: Sequential Fill (MB/s; more is better): r1 = 37.5 (SE +/- 0.44, N = 4), r2 = 37.4 (SE +/- 0.46, N = 4), r3 = 37.3 (SE +/- 0.39, N = 5).
LevelDB 1.22, Benchmark: Random Delete (microseconds per op; fewer is better): r1 = 47.23 (SE +/- 0.49, N = 5), r2 = 47.30 (SE +/- 0.57, N = 4), r3 = 47.39 (SE +/- 0.56, N = 4).
All LevelDB results: 1. (CXX) g++ options: -O3 -lsnappy -lpthread
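Sequential Fill is reported both per-operation and as bandwidth; the two are linked by the number of bytes written per operation. A sketch of the conversion, using a hypothetical 50 µs/op at a hypothetical 4096-byte record (the actual record size depends on the db_bench configuration, not figures from this article):

```python
def mb_per_s(us_per_op: float, bytes_per_op: int) -> float:
    """Convert microseconds-per-op plus a record size into MB/s."""
    ops_per_second = 1e6 / us_per_op
    return ops_per_second * bytes_per_op / 1e6

# Hypothetical: 50 us/op writing 4096-byte records.
print(mb_per_s(50.0, 4096))  # 81.92
```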
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms; fewer is better): r1 = 321.42 (SE +/- 2.78, N = 8, min 300.42 / max 371.06), r2 = 295.55 (SE +/- 0.81, N = 3, min 292.39 / max 306.56), r3 = 299.40 (SE +/- 0.36, N = 3, min 297.92 / max 315.55). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
Darktable Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
Darktable 3.0.1, Test: Masskrug - Acceleration: CPU-only (seconds; fewer is better): r1 = 7.128 (SE +/- 0.097), r2 = 7.150 (SE +/- 0.096), r3 = 7.155 (SE +/- 0.099); N = 12.
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better): r1 = 7.16575 (SE +/- 0.05152, N = 3, min 5.58), r2 = 7.04404 (SE +/- 0.11582, N = 12, min 4.11), r3 = 7.14574 (SE +/- 0.02993, N = 3, min 5.45).
oneDNN 2.0, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): r1 = 3.17762 (SE +/- 0.01732, N = 3, min 2.58), r2 = 3.16769 (SE +/- 0.02081, N = 3, min 2.39), r3 = 3.11291 (SE +/- 0.06527, N = 12, min 1.86).
All oneDNN results: 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
GEGL, Operation: Scale (seconds; fewer is better): r1 = 6.954 (SE +/- 0.055, N = 12), r2 = 6.973 (SE +/- 0.059, N = 13), r3 = 7.000 (SE +/- 0.056, N = 14).
LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s; more is better): r1 = 8120.67 (SE +/- 6.52), r2 = 8127.78 (SE +/- 4.75), r3 = 8079.18 (SE +/- 11.24); N = 3. 1. (CC) gcc options: -O3
GEGL, Operation: Reflect (seconds; fewer is better): r1 = 28.18 (SE +/- 0.29), r2 = 28.50 (SE +/- 0.30), r3 = 28.31 (SE +/- 0.22); N = 3.
GEGL, Operation: Tile Glass (seconds; fewer is better): r1 = 28.24 (SE +/- 0.36), r2 = 28.24 (SE +/- 0.27), r3 = 28.06 (SE +/- 0.39); N = 3.
GEGL, Operation: Crop (seconds; fewer is better): r1 = 8.900 (SE +/- 0.065, N = 11), r2 = 8.839 (SE +/- 0.073, N = 9), r3 = 8.826 (SE +/- 0.077, N = 8).
NAMD CUDA NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. This version of the NAMD test profile uses CUDA GPU acceleration. Learn more via the OpenBenchmarking.org test page.
NAMD CUDA 2.14, ATPase Simulation - 327,506 Atoms (days/ns; fewer is better): r1 = 0.22103 (SE +/- 0.00131, N = 3), r2 = 0.22238 (SE +/- 0.00245, N = 5), r3 = 0.22171 (SE +/- 0.00272, N = 4).
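NAMD's days/ns metric is the wall-clock days needed to simulate one nanosecond of the molecular system; its reciprocal is the more commonly quoted ns/day. A one-line conversion:

```python
def ns_per_day(days_per_ns: float) -> float:
    """Invert NAMD's days/ns figure into nanoseconds of simulation per day."""
    return 1.0 / days_per_ns

# 0.22103 days/ns (r1 above) is about 4.52 ns/day.
print(round(ns_per_day(0.22103), 2))
```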
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
PHPBench 0.8.1, PHP Benchmark Suite (score; more is better): r1 = 837911 (SE +/- 4346.11), r2 = 832417 (SE +/- 2600.83), r3 = 829705 (SE +/- 587.84); N = 3.
RNNoise RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
RNNoise 2020-06-28 (seconds; fewer is better): r1 = 22.08 (SE +/- 0.06), r2 = 21.32 (SE +/- 0.04), r3 = 22.04 (SE +/- 0.02); N = 3. 1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden -lm
oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): r1 = 8.96782 (SE +/- 0.04374, min 8.14), r2 = 9.00692 (SE +/- 0.01715, min 8.15), r3 = 9.06628 (SE +/- 0.11418, min 8); N = 3.
oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better): r1 = 9.77594 (SE +/- 0.04555, min 8.77), r2 = 9.76468 (SE +/- 0.03928, min 8.72), r3 = 9.73732 (SE +/- 0.03582, min 8.75); N = 3.
All oneDNN results: 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Inkscape Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.
Inkscape, Operation: SVG Files To PNG (seconds; fewer is better): r1 = 21.00 (SE +/- 0.04), r2 = 21.05 (SE +/- 0.02), r3 = 21.07 (SE +/- 0.03); N = 3. 1. Inkscape 0.92.5 (2060ec1f9f, 2020-04-08)
TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms; fewer is better): r1 = 272.91 (SE +/- 1.46, min 264.43 / max 277.05), r2 = 264.95 (SE +/- 0.11, min 264.07 / max 268.01), r3 = 272.68 (SE +/- 0.12, min 271.53 / max 277.6); N = 3. 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
Darktable 3.0.1, Test: Boat - Acceleration: CPU-only (seconds; fewer is better): r1 = 15.91 (SE +/- 0.02), r2 = 15.87 (SE +/- 0.04), r3 = 15.86 (SE +/- 0.03); N = 3.
RealSR-NCNN RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR performs real-world super resolution via kernel estimation and noise injection. NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better RealSR-NCNN 20200818 Scale: 4x - TAA: No r1 r2 r3 4 8 12 16 20 SE +/- 0.01, N = 3 SE +/- 0.09, N = 3 SE +/- 0.11, N = 3 14.73 14.66 14.69
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU r1 r2 r3 1.0682 2.1364 3.2046 4.2728 5.341 SE +/- 0.10403, N = 12 SE +/- 0.06823, N = 15 SE +/- 0.07477, N = 15 4.74772 4.71457 4.73728 MIN: 3.29 MIN: 3.29 MIN: 3.29 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU r1 r2 r3 3 6 9 12 15 SE +/- 0.23621, N = 12 SE +/- 0.15643, N = 15 SE +/- 0.22537, N = 12 9.87893 9.77701 9.81238 MIN: 6.66 MIN: 6.67 MIN: 6.65 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
yquake2 This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better yquake2 7.45 Renderer: OpenGL 1.x - Resolution: 1920 x 1080 r1 r2 r3 13 26 39 52 65 SE +/- 0.03, N = 3 SE +/- 0.07, N = 3 SE +/- 0.03, N = 3 59.9 59.9 59.9 1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.org Frames Per Second, More Is Better yquake2 7.45 Renderer: OpenGL 3.x - Resolution: 1920 x 1080 r1 r2 r3 13 26 39 52 65 60 60 60 1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
Opus Codec Encoding Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Opus Codec Encoding 1.3.1 WAV To Opus Encode r1 r2 r3 2 4 6 8 10 SE +/- 0.009, N = 5 SE +/- 0.004, N = 5 SE +/- 0.008, N = 5 7.624 7.602 7.616 1. (CXX) g++ options: -fvisibility=hidden -logg -lm
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU r1 r2 r3 0.9867 1.9734 2.9601 3.9468 4.9335 SE +/- 0.00310, N = 3 SE +/- 0.00806, N = 3 SE +/- 0.00559, N = 3 4.36381 4.37852 4.38535 MIN: 4.23 MIN: 4.25 MIN: 4.25 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU r1 r2 r3 1.0059 2.0118 3.0177 4.0236 5.0295 SE +/- 0.00967, N = 3 SE +/- 0.01661, N = 3 SE +/- 0.00726, N = 3 4.45564 4.47062 4.46656 MIN: 4.02 MIN: 4.02 MIN: 4.01 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Betsy GPU Compressor Betsy is an open-source GPU texture compressor supporting various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Betsy GPU Compressor 1.1 Beta Codec: ETC1 - Quality: Highest r1 r2 r3 1.3172 2.6344 3.9516 5.2688 6.586 SE +/- 0.068, N = 12 SE +/- 0.008, N = 3 SE +/- 0.024, N = 3 5.854 5.789 5.792 1. (CXX) g++ options: -O3 -O2 -lpthread -ldl
yquake2 This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better yquake2 7.45 Renderer: Software CPU - Resolution: 1920 x 1080 r1 r2 r3 14 28 42 56 70 SE +/- 0.07, N = 3 SE +/- 0.07, N = 3 SE +/- 0.09, N = 3 60.7 60.7 60.6 1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
VkResample VkResample is a Vulkan-based image upscaling library built on VkFFT. The sample input upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better VkResample 1.0 Upscale: 2x - Precision: Single r1 r2 r3 6 12 18 24 30 SE +/- 0.02, N = 3 SE +/- 0.05, N = 3 SE +/- 0.08, N = 3 24.99 25.19 25.23 1. (CXX) g++ options: -O3 -pthread
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 2.0 Preset: Fast r1 r2 r3 1.2668 2.5336 3.8004 5.0672 6.334 SE +/- 0.05, N = 3 SE +/- 0.08, N = 3 SE +/- 0.04, N = 12 5.44 5.59 5.63 1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
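ASTC's bitrate is determined entirely by the block footprint, since every block is encoded in exactly 128 bits regardless of how many texels it covers. A small sketch of the resulting bits-per-pixel and compressed sizes (the image dimensions below are arbitrary examples, not the test's actual inputs):

```python
# ASTC stores every block in exactly 128 bits; only the block footprint
# (e.g. 4x4 vs 8x8 texels) changes the effective bitrate.
ASTC_BLOCK_BITS = 128

def astc_bpp(block_w, block_h):
    """Bits per pixel for a given ASTC block footprint."""
    return ASTC_BLOCK_BITS / (block_w * block_h)

def astc_size_bytes(width, height, block_w, block_h):
    """Compressed size: images are padded up to whole blocks."""
    blocks_x = -(-width // block_w)   # ceiling division
    blocks_y = -(-height // block_h)
    return blocks_x * blocks_y * ASTC_BLOCK_BITS // 8

print(astc_bpp(4, 4))                     # 8.0 bpp
print(astc_bpp(8, 8))                     # 2.0 bpp
print(astc_size_bytes(1024, 1024, 4, 4))  # 1048576 bytes (1 MiB)
```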
OpenBenchmarking.org Requests Per Second, More Is Better Redis 6.0.9 Test: GET r1 r2 r3 700K 1400K 2100K 2800K 3500K SE +/- 41615.25, N = 3 SE +/- 13828.40, N = 3 SE +/- 8077.93, N = 3 3248596.08 3012560.83 3009326.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU r1 r2 r3 0.6248 1.2496 1.8744 2.4992 3.124 SE +/- 0.00400, N = 3 SE +/- 0.01530, N = 3 SE +/- 0.00352, N = 3 2.72558 2.77670 2.74874 MIN: 2.54 MIN: 2.56 MIN: 2.54 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU r1 r2 r3 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.04, N = 3 SE +/- 0.03, N = 3 12.47 12.44 12.61 MIN: 12.08 MIN: 12.09 MIN: 12.2 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Basis Universal Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Basis Universal 1.12 Settings: UASTC Level 0 r1 r2 r3 2 4 6 8 10 SE +/- 0.079, N = 3 SE +/- 0.061, N = 3 SE +/- 0.095, N = 3 7.288 7.345 7.353 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
LevelDB LevelDB is a key-value storage library developed by Google that supports Snappy data compression and other modern features. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Microseconds Per Op, Fewer Is Better LevelDB 1.22 Benchmark: Hot Read r1 r2 r3 2 4 6 8 10 SE +/- 0.013, N = 3 SE +/- 0.075, N = 3 SE +/- 0.049, N = 3 6.946 7.099 7.128 1. (CXX) g++ options: -O3 -lsnappy -lpthread
Rodinia Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Rodinia 3.1 Test: OpenCL Particle Filter r1 r2 r3 2 4 6 8 10 SE +/- 0.065, N = 3 SE +/- 0.013, N = 3 SE +/- 0.016, N = 3 7.115 7.055 7.027 1. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.org H/s, More Is Better Hashcat 6.1.1 Benchmark: SHA1 r1 r2 r3 2000M 4000M 6000M 8000M 10000M SE +/- 31347213.24, N = 3 SE +/- 17380832.35, N = 3 SE +/- 18653000.95, N = 3 8585766667 8544500000 8535333333
OpenBenchmarking.org H/s, More Is Better Hashcat 6.1.1 Benchmark: MD5 r1 r2 r3 5000M 10000M 15000M 20000M 25000M SE +/- 110495102.96, N = 3 SE +/- 81107726.72, N = 3 SE +/- 49256167.13, N = 3 24334866667 24260200000 24196900000
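The H/s figures above are hashes computed per second on the GPU. As a point of reference for the unit (not the performance), here is a single-threaded CPU sketch using Python's hashlib; the function name and timing window are our own:

```python
# Measure hashes per second on the CPU -- the same unit as the Hashcat
# results above, but orders of magnitude slower than a GPU benchmark.
import hashlib
import time

def hashes_per_second(algo="md5", duration=0.2):
    digest = getattr(hashlib, algo)
    count = 0
    start = time.perf_counter()
    while (elapsed := time.perf_counter() - start) < duration:
        digest(count.to_bytes(8, "little")).digest()
        count += 1
    return count / elapsed

print(f"MD5:  {hashes_per_second('md5'):.0f} H/s (CPU, single thread)")
print(f"SHA1: {hashes_per_second('sha1'):.0f} H/s (CPU, single thread)")
```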
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU r1 r2 r3 4 8 12 16 20 SE +/- 0.02, N = 3 SE +/- 0.08, N = 3 SE +/- 0.02, N = 3 18.01 17.90 18.03 MIN: 17.22 MIN: 17.18 MIN: 17.24 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.0 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU r1 r2 r3 5 10 15 20 25 SE +/- 0.06, N = 3 SE +/- 0.06, N = 3 SE +/- 0.02, N = 3 21.69 21.70 21.62 MIN: 21.47 MIN: 21.48 MIN: 21.51 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org GB/s, More Is Better cl-mem 2017-01-13 Benchmark: Copy r1 r2 r3 50 100 150 200 250 SE +/- 0.22, N = 3 SE +/- 0.24, N = 3 SE +/- 0.27, N = 3 236.6 235.4 235.1 1. (CC) gcc options: -O2 -flto -lOpenCL
OpenBenchmarking.org GB/s, More Is Better cl-mem 2017-01-13 Benchmark: Read r1 r2 r3 70 140 210 280 350 SE +/- 0.18, N = 3 SE +/- 0.09, N = 3 SE +/- 0.03, N = 3 330.3 329.9 329.9 1. (CC) gcc options: -O2 -flto -lOpenCL
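cl-mem reports sustained GPU memory bandwidth as bytes moved per unit time. A rough host-side analogue of the same measurement, timing a plain buffer copy (buffer size and repeat count are arbitrary choices, not cl-mem's):

```python
# Host-memory analogue of cl-mem's Copy test: time a buffer copy and
# express throughput in GB/s, keeping the best of several repeats.
import time

def copy_bandwidth_gbps(size_bytes=64 * 1024 * 1024, repeats=5):
    src = bytearray(size_bytes)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        dst = bytes(src)  # one full read + write pass over the buffer
        best = min(best, time.perf_counter() - start)
    assert len(dst) == size_bytes
    return size_bytes / best / 1e9

print(f"{copy_bandwidth_gbps():.1f} GB/s (host memcpy)")
```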
Waifu2x-NCNN Vulkan Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Waifu2x-NCNN Vulkan 20200818 Scale: 2x - Denoise: 3 - TAA: Yes r1 r2 r3 2 4 6 8 10 SE +/- 0.004, N = 3 SE +/- 0.007, N = 3 SE +/- 0.011, N = 3 6.020 6.102 6.093
Darktable Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Darktable 3.0.1 Test: Server Room - Acceleration: CPU-only r1 r2 r3 0.9407 1.8814 2.8221 3.7628 4.7035 SE +/- 0.010, N = 3 SE +/- 0.004, N = 3 SE +/- 0.006, N = 3 4.181 4.174 4.178
LevelDB LevelDB is a key-value storage library developed by Google that supports Snappy data compression and other modern features. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Microseconds Per Op, Fewer Is Better LevelDB 1.22 Benchmark: Random Fill r1 r2 r3 9 18 27 36 45 SE +/- 0.19, N = 3 SE +/- 0.20, N = 3 SE +/- 0.07, N = 3 41.04 41.03 40.98 1. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.org MB/s, More Is Better LevelDB 1.22 Benchmark: Random Fill r1 r2 r3 10 20 30 40 50 SE +/- 0.21, N = 3 SE +/- 0.19, N = 3 SE +/- 0.07, N = 3 43.1 43.1 43.2 1. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.org Microseconds Per Op, Fewer Is Better LevelDB 1.22 Benchmark: Overwrite r1 r2 r3 9 18 27 36 45 SE +/- 0.15, N = 3 SE +/- 0.08, N = 3 SE +/- 0.04, N = 3 40.93 40.96 40.76 1. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.org MB/s, More Is Better LevelDB 1.22 Benchmark: Overwrite r1 r2 r3 10 20 30 40 50 SE +/- 0.15, N = 3 SE +/- 0.07, N = 3 SE +/- 0.03, N = 3 43.2 43.2 43.4 1. (CXX) g++ options: -O3 -lsnappy -lpthread
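The µs/op and MB/s tables above describe the same runs from two angles, linked by the average number of bytes handled per operation. A sketch of that conversion, using the r1 Random Fill figures; note that db_bench's MB/s is computed over wall-clock time and includes key bytes, so this back-of-envelope link is approximate:

```python
# Relating LevelDB's two reporting units: (MB/s) * (us/op) = bytes/op,
# since 1 MB/s = 1e6 bytes/s and 1 us = 1e-6 s.
def mbps(us_per_op, bytes_per_op):
    """Throughput in MB/s given per-op latency and per-op payload."""
    return (1e6 / us_per_op) * bytes_per_op / 1e6

def implied_bytes_per_op(us_per_op, mb_per_s):
    """Back out the average bytes moved per operation."""
    return mb_per_s * us_per_op

# r1 Random Fill from the tables above: 41.04 us/op and 43.1 MB/s.
print(implied_bytes_per_op(41.04, 43.1))  # roughly 1769 bytes per op
```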
MandelGPU MandelGPU is an OpenCL benchmark; this test runs the OpenCL float4 rendering kernel with a maximum of 4096 iterations. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples/sec, More Is Better MandelGPU 1.3pts1 OpenCL Device: GPU r1 r2 r3 50M 100M 150M 200M 250M SE +/- 1032565.22, N = 3 SE +/- 157365.45, N = 3 SE +/- 1449538.54, N = 3 251986408.7 252826584.8 252822614.4 1. (CC) gcc options: -O3 -lm -ftree-vectorize -funroll-loops -lglut -lOpenCL -lGL
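The kernel MandelGPU times is the classic Mandelbrot escape-time iteration, capped at 4096 iterations per sample. A scalar Python sketch of the per-sample work (the real benchmark runs this as an OpenCL float4 kernel, processing four samples per work-item):

```python
# Mandelbrot escape-time iteration: iterate z = z^2 + c until |z| > 2
# or the 4096-iteration cap is hit, as in MandelGPU's kernel.
def mandel_iterations(cr, ci, max_iter=4096):
    zr = zi = 0.0
    for i in range(max_iter):
        if zr * zr + zi * zi > 4.0:  # |z| > 2: the point has escaped
            return i
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return max_iter

print(mandel_iterations(0.0, 0.0))  # interior point: never escapes -> 4096
print(mandel_iterations(2.0, 2.0))  # escapes on the first check after z = c
```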
LevelDB LevelDB is a key-value storage library developed by Google that supports Snappy data compression and other modern features. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Microseconds Per Op, Fewer Is Better LevelDB 1.22 Benchmark: Fill Sync r1 r2 r3 700 1400 2100 2800 3500 SE +/- 33.91, N = 3 SE +/- 25.98, N = 3 SE +/- 60.32, N = 3 3361.78 3424.92 3386.08 1. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.org MB/s, More Is Better LevelDB 1.22 Benchmark: Fill Sync r1 r2 r3 0.1125 0.225 0.3375 0.45 0.5625 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.5 0.5 0.5 1. (CXX) g++ options: -O3 -lsnappy -lpthread
ViennaCL ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile uses ViennaCL OpenCL support and runs the included computational benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOPS, More Is Better ViennaCL 1.4.2 OpenCL LU Factorization r1 r2 r3 15 30 45 60 75 SE +/- 0.36, N = 3 SE +/- 0.08, N = 3 SE +/- 0.44, N = 3 68.29 64.23 65.92 1. (CXX) g++ options: -rdynamic -lOpenCL
Darktable Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Darktable 3.0.1 Test: Server Rack - Acceleration: CPU-only r1 r2 r3 0.0407 0.0814 0.1221 0.1628 0.2035 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.181 0.181 0.181
r1 Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe0 - Thermald 1.9.1
OpenCL Notes: GPU Compute Cores: 3072
Python Notes: Python 3.8.3
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 4 January 2021 12:11 by user user.
r2 Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe0 - Thermald 1.9.1
OpenCL Notes: GPU Compute Cores: 3072
Python Notes: Python 3.8.3
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 5 January 2021 07:42 by user user.
r3 Processor: Intel Core i9-10885H @ 5.30GHz (8 Cores / 16 Threads), Motherboard: HP 8736 (S91 Ver. 01.02.01 BIOS), Chipset: Intel Comet Lake PCH, Memory: 32GB, Disk: 2048GB KXG50PNV2T04 KIOXIA, Graphics: NVIDIA Quadro RTX 5000 with Max-Q Design 16GB (600/6000MHz), Audio: Intel Comet Lake PCH cAVS, Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 20.04, Kernel: 5.6.0-1034-oem (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: NVIDIA 450.80.02, OpenGL: 4.6.0, OpenCL: OpenCL 1.2 CUDA 11.0.228, Vulkan: 1.2.133, Compiler: GCC 9.3.0 + CUDA 10.1, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe0 - Thermald 1.9.1
OpenCL Notes: GPU Compute Cores: 3072
Python Notes: Python 3.8.3
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 6 January 2021 03:38 by user user.