Intel Core i7-5600U testing with a LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS) and Intel HD 5500 3GB on Ubuntu 20.10 via the Phoronix Test Suite.
R1 Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0x2f - Thermald 2.3
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
2 3 Processor: Intel Core i7-5600U @ 3.20GHz (2 Cores / 4 Threads), Motherboard: LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS), Chipset: Intel Broadwell-U-OPI, Memory: 8GB, Disk: 128GB SAMSUNG MZNTE128, Graphics: Intel HD 5500 3GB (950MHz), Audio: Intel Broadwell-U Audio, Network: Intel I218-LM + Intel 7265
OS: Ubuntu 20.10, Kernel: 5.9.1-050901-generic (x86_64), Desktop: GNOME Shell 3.38.1, Display Server: X Server 1.20.9, Display Driver: modesetting 1.20.9, OpenGL: 4.6 Mesa 21.0.0-devel (git-bd69765 2021-01-01 groovy-oibaf-ppa), OpenCL: OpenCL 3.0, Vulkan: 1.2.145, Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 1920x1080
VkFFT VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark runs FFTs across many different sizes and reports an overall benchmark score. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: VkFFT 1.1.1 (Benchmark Score, More Is Better). R1: 1126 (SE +/- 2.52, N = 3); 3: 1124 (SE +/- 1.45, N = 3); 2: 1122 (SE +/- 2.00, N = 3). 1. (CXX) g++ options: -O3 -pthread
Build2 This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like package and dependency management features. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: Build2 0.13 - Time To Compile (Seconds, Fewer Is Better). 3: 906.25 (SE +/- 0.82, N = 3); 2: 918.46 (SE +/- 0.55, N = 3); R1: 919.50 (SE +/- 0.32, N = 3)
CLOMP CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
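For context, the hedged C++ sketch below shows the kind of measurement CLOMP's static-schedule speedup result represents: the same loop is timed serially and with OpenMP's schedule(static) across all cores, then the ratio is reported. The array size, the sqrt workload, and the file name clomp_sketch.cpp are illustrative placeholders, not CLOMP's actual kernels.

// Minimal sketch (not CLOMP itself): compare a serial loop against an
// OpenMP schedule(static) loop and report the speedup ratio.
// Build (assumption, mirroring the test's flags): g++ -fopenmp -O3 clomp_sketch.cpp -lm
#include <cmath>
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const std::size_t n = 1 << 22;          // arbitrary problem size
    std::vector<double> data(n, 1.0);

    // Serial baseline.
    double t0 = omp_get_wtime();
    double serial_sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        serial_sum += std::sqrt(data[i] * i);
    double serial_time = omp_get_wtime() - t0;

    // OpenMP static schedule across all available threads.
    double t1 = omp_get_wtime();
    double omp_sum = 0.0;
    #pragma omp parallel for schedule(static) reduction(+:omp_sum)
    for (std::size_t i = 0; i < n; ++i)
        omp_sum += std::sqrt(data[i] * i);
    double omp_time = omp_get_wtime() - t1;

    std::printf("threads=%d serial=%.3fs omp=%.3fs speedup=%.2f (checksums %.1f / %.1f)\n",
                omp_get_max_threads(), serial_time, omp_time,
                serial_time / omp_time, serial_sum, omp_sum);
    return 0;
}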
OpenBenchmarking.org: CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better). 3: 1.3 (SE +/- 0.00, N = 3); R1: 1.3 (SE +/- 0.01, N = 12); 2: 1.2 (SE +/- 0.00, N = 3). 1. (CC) gcc options: -fopenmp -O3 -lm
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode; version 7.30.8 is under test here. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: BRL-CAD 7.30.8 - VGR Performance Metric (More Is Better). R1: 14580; 2: 14562; 3: 14504. 1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm
NCNN NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
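The test profile drives ncnn's bundled benchmark binary; as a rough illustration of the API the CPU and Vulkan GPU targets exercise, here is a hedged C++ sketch of one timed inference. The mobilenet.param/mobilenet.bin files, the blob names "data" and "prob", and the 224x224 dummy input are model-specific placeholders, not the benchmark's own configuration.

// Minimal sketch of the ncnn inference API (the test itself runs ncnn's
// bundled benchmark). Model paths and blob names are placeholders.
#include <chrono>
#include <cstdio>
#include <vector>
#include "net.h"   // ncnn

int main() {
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;      // false corresponds to the "Target: CPU" runs
    if (net.load_param("mobilenet.param") || net.load_model("mobilenet.bin"))
        return 1;                            // hypothetical model files

    std::vector<unsigned char> rgb(224 * 224 * 3, 127);   // dummy image data
    ncnn::Mat in = ncnn::Mat::from_pixels(rgb.data(), ncnn::Mat::PIXEL_RGB, 224, 224);

    auto t0 = std::chrono::steady_clock::now();
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);                    // blob names depend on the model
    ncnn::Mat out;
    ex.extract("prob", out);
    auto t1 = std::chrono::steady_clock::now();

    std::printf("one inference: %.2f ms, output width %d\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count(), out.w);
    return 0;
}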
OpenBenchmarking.org: NCNN 20201218 (ms, Fewer Is Better; all results N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Target: CPU - Model: regnety_400m. 3: 27.64 (SE +/- 0.01, MIN: 27.49 / MAX: 29.98); 2: 27.72 (SE +/- 0.06, MIN: 27.48 / MAX: 31); R1: 27.82 (SE +/- 0.04, MIN: 27.57 / MAX: 37.34)
Target: CPU - Model: squeezenet_ssd. 2: 71.03 (SE +/- 0.04, MIN: 70.28 / MAX: 76.92); 3: 71.26 (SE +/- 0.06, MIN: 70.52 / MAX: 78.34); R1: 71.33 (SE +/- 0.12, MIN: 70.87 / MAX: 74.75)
Target: CPU - Model: yolov4-tiny. 3: 78.88 (SE +/- 0.14, MIN: 78.2 / MAX: 90.69); 2: 79.04 (SE +/- 0.14, MIN: 78.34 / MAX: 85.24); R1: 79.44 (SE +/- 0.27, MIN: 78.27 / MAX: 87.18)
Target: CPU - Model: resnet50. 3: 101.01 (SE +/- 0.06, MIN: 100.5 / MAX: 114.21); R1: 101.11 (SE +/- 0.12, MIN: 100.49 / MAX: 113.37); 2: 101.17 (SE +/- 0.12, MIN: 100.65 / MAX: 111.1)
Target: CPU - Model: alexnet. 3: 36.81 (SE +/- 0.09, MIN: 34.96 / MAX: 79.9); R1: 36.84 (SE +/- 0.18, MIN: 35.27 / MAX: 39.39); 2: 37.02 (SE +/- 0.15, MIN: 35.32 / MAX: 103.85)
Target: CPU - Model: resnet18. R1: 45.51 (SE +/- 0.09, MIN: 45.16 / MAX: 47.35); 3: 45.51 (SE +/- 0.03, MIN: 45.23 / MAX: 48.14); 2: 45.65 (SE +/- 0.13, MIN: 45.26 / MAX: 56.06)
Target: CPU - Model: vgg16. 2: 152.88 (SE +/- 0.24, MIN: 151.28 / MAX: 160.12); 3: 152.91 (SE +/- 0.19, MIN: 151.95 / MAX: 164.13); R1: 153.18 (SE +/- 0.18, MIN: 152.21 / MAX: 161.01)
Target: CPU - Model: googlenet. 2: 47.01 (SE +/- 0.03, MIN: 46.74 / MAX: 49.94); 3: 47.16 (SE +/- 0.12, MIN: 46.7 / MAX: 59.52); R1: 47.18 (SE +/- 0.09, MIN: 46.85 / MAX: 56.58)
Target: CPU - Model: blazeface. 2: 5.62 (SE +/- 0.01, MIN: 5.54 / MAX: 6.01); 3: 5.63 (SE +/- 0.01, MIN: 5.5 / MAX: 5.84); R1: 5.64 (SE +/- 0.01, MIN: 5.57 / MAX: 5.86)
Target: CPU - Model: efficientnet-b0. 2: 22.11 (SE +/- 0.35, MIN: 20.59 / MAX: 33.72); 3: 22.30 (SE +/- 0.39, MIN: 21.02 / MAX: 25.56); R1: 22.50 (SE +/- 0.41, MIN: 20.99 / MAX: 25.9)
Target: CPU - Model: mnasnet. 2: 14.05 (SE +/- 0.41, MIN: 13.01 / MAX: 18.86); 3: 14.16 (SE +/- 0.40, MIN: 13.06 / MAX: 28.01); R1: 14.26 (SE +/- 0.37, MIN: 13.24 / MAX: 17.01)
Target: CPU - Model: shufflenet-v2. 3: 21.30 (SE +/- 0.57, MIN: 20.01 / MAX: 23.24); 2: 21.34 (SE +/- 0.63, MIN: 20 / MAX: 24.44); R1: 21.40 (SE +/- 0.57, MIN: 20.12 / MAX: 23.22)
Target: CPU-v3-v3 - Model: mobilenet-v3. 2: 12.56 (SE +/- 0.13, MIN: 12.21 / MAX: 15.08); R1: 12.63 (SE +/- 0.19, MIN: 12.19 / MAX: 14.68); 3: 12.65 (SE +/- 0.19, MIN: 12.18 / MAX: 14.68)
Target: CPU-v2-v2 - Model: mobilenet-v2. 2: 14.44 (SE +/- 0.13, MIN: 14.04 / MAX: 16.1); 3: 14.53 (SE +/- 0.18, MIN: 13.97 / MAX: 26.42); R1: 14.55 (SE +/- 0.19, MIN: 14.04 / MAX: 28.86)
Target: CPU - Model: mobilenet. 2: 61.38 (SE +/- 0.06, MIN: 61.05 / MAX: 87.54); R1: 61.49 (SE +/- 0.08, MIN: 61.05 / MAX: 63.53); 3: 61.61 (SE +/- 0.16, MIN: 61.11 / MAX: 113.17)
Target: Vulkan GPU - Model: regnety_400m. 2: 27.13 (SE +/- 0.60, MIN: 25.79 / MAX: 37.93); 3: 27.25 (SE +/- 0.53, MIN: 26 / MAX: 29.97); R1: 27.81 (SE +/- 0.07, MIN: 26.38 / MAX: 30.33)
Target: Vulkan GPU - Model: squeezenet_ssd. 2: 70.97 (SE +/- 0.10, MIN: 70.58 / MAX: 80.98); 3: 71.18 (SE +/- 0.09, MIN: 70.63 / MAX: 78.41); R1: 71.23 (SE +/- 0.10, MIN: 70.8 / MAX: 121.35)
Target: Vulkan GPU - Model: yolov4-tiny. 2: 78.90 (SE +/- 0.06, MIN: 78.21 / MAX: 91.76); 3: 79.12 (SE +/- 0.10, MIN: 78.31 / MAX: 86.73); R1: 79.35 (SE +/- 0.03, MIN: 78.46 / MAX: 91.83)
Target: Vulkan GPU - Model: resnet50. 2: 101.22 (SE +/- 0.21, MIN: 100.01 / MAX: 110.27); 3: 101.23 (SE +/- 0.28, MIN: 100.55 / MAX: 114.34); R1: 101.55 (SE +/- 0.16, MIN: 100.72 / MAX: 112.98)
Target: Vulkan GPU - Model: alexnet. 2: 36.70 (SE +/- 0.23, MIN: 35.28 / MAX: 38.63); 3: 36.76 (SE +/- 0.12, MIN: 35.25 / MAX: 46.5); R1: 37.08 (SE +/- 0.27, MIN: 34.94 / MAX: 39.37)
Target: Vulkan GPU - Model: resnet18. 2: 45.41 (SE +/- 0.05, MIN: 45.11 / MAX: 48.07); R1: 45.64 (SE +/- 0.12, MIN: 45.25 / MAX: 47.78); 3: 45.73 (SE +/- 0.29, MIN: 45.08 / MAX: 48.32)
Target: Vulkan GPU - Model: vgg16. 3: 152.93 (SE +/- 0.31, MIN: 151.38 / MAX: 166.09); 2: 152.98 (SE +/- 0.16, MIN: 152.01 / MAX: 164.13); R1: 153.00 (SE +/- 0.07, MIN: 151.95 / MAX: 164.37)
Target: Vulkan GPU - Model: googlenet. 2: 46.26 (SE +/- 0.81, MIN: 43.6 / MAX: 59.76); 3: 46.70 (SE +/- 0.77, MIN: 44.45 / MAX: 57.93); R1: 47.33 (SE +/- 0.16, MIN: 46.92 / MAX: 49.74)
Target: Vulkan GPU - Model: blazeface. 3: 5.47 (SE +/- 0.16, MIN: 5.12 / MAX: 5.79); 2: 5.48 (SE +/- 0.15, MIN: 5.14 / MAX: 5.72); R1: 5.65 (SE +/- 0.01, MIN: 5.56 / MAX: 7.88)
Target: Vulkan GPU - Model: efficientnet-b0. 2: 21.97 (SE +/- 0.66, MIN: 20.55 / MAX: 27.24); 3: 22.01 (SE +/- 0.59, MIN: 20.61 / MAX: 33.2); R1: 22.46 (SE +/- 0.30, MIN: 21.52 / MAX: 24.07)
Target: Vulkan GPU - Model: mnasnet. 3: 13.98 (SE +/- 0.45, MIN: 13.03 / MAX: 14.74); 2: 14.13 (SE +/- 0.47, MIN: 13.1 / MAX: 14.93); R1: 14.45 (SE +/- 0.26, MIN: 13.46 / MAX: 16.94)
Target: Vulkan GPU - Model: shufflenet-v2. 3: 21.21 (SE +/- 0.57, MIN: 19.96 / MAX: 25.67); 2: 21.36 (SE +/- 0.58, MIN: 19.98 / MAX: 35.94); R1: 21.53 (SE +/- 0.37, MIN: 20.55 / MAX: 22.52)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3. 2: 12.57 (SE +/- 0.14, MIN: 12.24 / MAX: 14.08); 3: 12.61 (SE +/- 0.14, MIN: 12.23 / MAX: 13.54); R1: 12.69 (SE +/- 0.07, MIN: 12.21 / MAX: 24.64)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2. 2: 14.47 (SE +/- 0.15, MIN: 14.06 / MAX: 28.34); 3: 14.54 (SE +/- 0.20, MIN: 14.01 / MAX: 17.8); R1: 14.56 (SE +/- 0.14, MIN: 14.07 / MAX: 17.44)
Target: Vulkan GPU - Model: mobilenet. 2: 61.41 (SE +/- 0.05, MIN: 61.02 / MAX: 63.86); 3: 61.47 (SE +/- 0.02, MIN: 61.15 / MAX: 64.55); R1: 61.55 (SE +/- 0.06, MIN: 61.05 / MAX: 64.2)
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
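The benchmark itself runs the bundled benchdnn harness; as a rough illustration of the library it exercises, below is a hedged sketch of the oneDNN 2.x C++ API creating a CPU engine and timing a single ReLU eltwise primitive. The 1x1000 shape, the source file name, and the -ldnnl link flag are illustrative assumptions, not the harness configuration.

// Hedged sketch of the oneDNN 2.x C++ API (the test runs benchdnn, not this code).
// Build (assumption): g++ -O3 -std=c++11 onednn_sketch.cpp -ldnnl
#include <chrono>
#include <cstdio>
#include <vector>
#include "dnnl.hpp"

int main() {
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);   // CPU engine, as in these runs
    dnnl::stream strm(eng);

    dnnl::memory::desc md({1, 1000}, dnnl::memory::data_type::f32,
                          dnnl::memory::format_tag::nc);
    std::vector<float> src_data(1000, -1.0f), dst_data(1000, 0.0f);
    dnnl::memory src(md, eng, src_data.data());
    dnnl::memory dst(md, eng, dst_data.data());

    // ReLU forward primitive (oneDNN 2.x still uses operation descriptors).
    dnnl::eltwise_forward::desc d(dnnl::prop_kind::forward_inference,
                                  dnnl::algorithm::eltwise_relu, md, 0.f, 0.f);
    dnnl::eltwise_forward::primitive_desc pd(d, eng);
    dnnl::eltwise_forward relu(pd);

    auto t0 = std::chrono::steady_clock::now();
    relu.execute(strm, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
    strm.wait();
    auto t1 = std::chrono::steady_clock::now();

    std::printf("relu: %.3f ms, dst[0]=%.1f\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count(), dst_data[0]);
    return 0;
}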
OpenBenchmarking.org: oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). 3: 20847.7 (SE +/- 5.57, N = 3, MIN: 20792.3); R1: 20853.0 (SE +/- 16.92, N = 3, MIN: 20801.4); 2: 20912.7 (SE +/- 36.91, N = 3, MIN: 20792.3). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better). 2: 20853.4 (SE +/- 9.04, N = 3, MIN: 20778.6); 3: 20858.0 (SE +/- 3.46, N = 3, MIN: 20799.8); R1: 20903.3 (SE +/- 20.57, N = 3, MIN: 20835.5). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). 3: 20812.2 (SE +/- 37.82, N = 3, MIN: 20694.5); R1: 20830.5 (SE +/- 29.30, N = 3, MIN: 20740.9); 2: 20852.0 (SE +/- 51.92, N = 3, MIN: 20691.5). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
VKMark VKMark is an open-source Vulkan benchmark that renders a series of test scenes and reports an overall score. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: VKMark 2020-05-21 - Resolution: 1280 x 1024 (VKMark Score, More Is Better). R1: 693 (SE +/- 0.67, N = 3); 3: 689 (SE +/- 1.53, N = 3); 2: 685 (SE +/- 2.19, N = 3). 1. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.org: VKMark 2020-05-21 - Resolution: 800 x 600 (VKMark Score, More Is Better). 2: 1691 (SE +/- 0.88, N = 3); R1: 1676 (SE +/- 7.54, N = 3); 3: 1670 (SE +/- 10.20, N = 3). 1. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.org: VKMark 2020-05-21 - Resolution: 1024 x 768 (VKMark Score, More Is Better). 2: 1097 (SE +/- 6.11, N = 3); R1: 1096 (SE +/- 6.69, N = 3); 3: 1092 (SE +/- 4.18, N = 3). 1. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). 3: 11243.5 (SE +/- 44.20, N = 3, MIN: 11158.2); R1: 11261.4 (SE +/- 48.62, N = 3, MIN: 11146.6); 2: 11332.2 (SE +/- 15.69, N = 3, MIN: 11227.9). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better). 3: 11214.9 (SE +/- 13.64, N = 3, MIN: 11171.5); R1: 11309.1 (SE +/- 65.97, N = 3, MIN: 11154.3); 2: 11320.1 (SE +/- 44.81, N = 3, MIN: 11234.3). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). 3: 11200.4 (SE +/- 18.25, N = 3, MIN: 11129.9); R1: 11297.4 (SE +/- 25.05, N = 3, MIN: 11225.5); 2: 11316.9 (SE +/- 15.94, N = 3, MIN: 11169.5). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Node.js V8 Web Tooling Benchmark Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: Node.js V8 Web Tooling Benchmark (runs/s, More Is Better). 3: 7.31 (SE +/- 0.02, N = 3); 2: 7.30 (SE +/- 0.07, N = 3); R1: 7.24 (SE +/- 0.02, N = 3). 1. Nodejs v12.18.2
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
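As a rough illustration of what these throughput tests measure, the hedged sketch below parses a document with simdjson's DOM API (exception style) and derives a GB/s figure. The input file name data.json is a placeholder; the real test profile uses its bundled corpora such as Kostya and LargeRandom, and its build links simdjson into the benchmark directly.

// Hedged sketch of simdjson's DOM parsing API with a simple GB/s calculation.
// Build (assumption): g++ -O3 simdjson_sketch.cpp simdjson.cpp -pthread
#include <chrono>
#include <cstdio>
#include "simdjson.h"

int main() {
    simdjson::dom::parser parser;
    simdjson::padded_string json = simdjson::padded_string::load("data.json");  // placeholder file

    auto t0 = std::chrono::steady_clock::now();
    simdjson::dom::element doc = parser.parse(json);   // throws on parse error
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    std::printf("parsed %zu bytes in %.6f s => %.2f GB/s (root %s an object)\n",
                json.size(), seconds, json.size() / seconds / 1e9,
                doc.is_object() ? "is" : "is not");
    return 0;
}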
OpenBenchmarking.org: simdjson 0.7.1 - Throughput Test: Kostya (GB/s, More Is Better). 3: 0.49 (SE +/- 0.00, N = 3); 2: 0.49 (SE +/- 0.00, N = 3); R1: 0.49 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.org: simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, More Is Better). 3: 0.33 (SE +/- 0.00, N = 3); 2: 0.33 (SE +/- 0.00, N = 3); R1: 0.33 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -pthread
Libplacebo Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: Libplacebo 2.72.2 - Test: av1_grain_lap (FPS, More Is Better). R1: 537.11 (SE +/- 0.34, N = 3); 2: 534.51 (SE +/- 1.77, N = 3); 3: 534.15 (SE +/- 1.17, N = 3). 1. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.org: Libplacebo 2.72.2 - Test: hdr_peakdetect (FPS, More Is Better). 3: 32916.08 (SE +/- 401.65, N = 3); 2: 32911.33 (SE +/- 358.35, N = 3); R1: 32875.89 (SE +/- 290.63, N = 3). 1. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.org: Libplacebo 2.72.2 - Test: polar_nocompute (FPS, More Is Better). 3: 23.73 (SE +/- 0.01, N = 3); R1: 23.73 (SE +/- 0.01, N = 3); 2: 23.72 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.org: Libplacebo 2.72.2 - Test: deband_heavy (FPS, More Is Better). R1: 38.65 (SE +/- 0.02, N = 3); 3: 38.63 (SE +/- 0.02, N = 3); 2: 38.61 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better). 3: 0.58 (SE +/- 0.00, N = 3); 2: 0.58 (SE +/- 0.00, N = 3); R1: 0.58 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.org: simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better). 3: 0.59 (SE +/- 0.00, N = 3); 2: 0.59 (SE +/- 0.00, N = 3); R1: 0.59 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -pthread
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better). 3: 530230 (SE +/- 222.67, N = 3); R1: 529100 (SE +/- 498.65, N = 3); 2: 528447 (SE +/- 388.18, N = 3)
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). 2: 6.80809 (SE +/- 0.09974, N = 3, MIN: 6.36); 3: 6.81423 (SE +/- 0.10395, N = 15, MIN: 6.35); R1: 7.01952 (SE +/- 0.06965, N = 15, MIN: 6.61). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Cryptsetup This is a benchmark of cryptsetup's built-in benchmark mode, reporting encryption/decryption throughput for several ciphers and key sizes along with PBKDF2 iteration rates. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, More Is Better). 2: 315.7 (SE +/- 0.57, N = 3); R1: 314.9 (SE +/- 0.73, N = 3); 3: 314.4 (SE +/- 0.15, N = 3)
OpenBenchmarking.org: Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, More Is Better). 2: 490.5 (SE +/- 1.01, N = 3); R1: 488.7 (SE +/- 1.22, N = 3); 3: 487.8 (SE +/- 0.51, N = 3)
OpenBenchmarking.org: Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, More Is Better). 2: 506.8 (SE +/- 0.47, N = 3); 3: 506.4 (SE +/- 1.01, N = 3); R1: 505.2 (SE +/- 0.87, N = 3)
OpenBenchmarking.org: Cryptsetup - AES-XTS 512b Decryption (MiB/s, More Is Better). 2: 1289.3 (SE +/- 13.32, N = 3); R1: 1285.1 (SE +/- 7.00, N = 3); 3: 1273.4 (SE +/- 6.22, N = 3)
OpenBenchmarking.org: Cryptsetup - AES-XTS 512b Encryption (MiB/s, More Is Better). R1: 1304.9 (SE +/- 6.50, N = 3); 2: 1287.5 (SE +/- 6.69, N = 3); 3: 1285.9 (SE +/- 2.40, N = 3)
OpenBenchmarking.org: Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, More Is Better). 2: 316.0 (SE +/- 0.26, N = 3); 3: 315.9 (SE +/- 0.86, N = 3); R1: 314.8 (SE +/- 0.48, N = 3)
OpenBenchmarking.org: Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, More Is Better). 3: 315.2 (SE +/- 0.35, N = 3); 2: 314.3 (SE +/- 0.43, N = 3); R1: 313.2 (SE +/- 0.84, N = 3)
OpenBenchmarking.org: Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, More Is Better). R1: 489.0 (SE +/- 0.20, N = 3); 3: 488.7 (SE +/- 1.51, N = 3); 2: 488.6 (SE +/- 1.59, N = 3)
OpenBenchmarking.org: Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, More Is Better). 2: 505.4 (SE +/- 0.92, N = 3); R1: 505.1 (SE +/- 0.99, N = 3); 3: 503.4 (SE +/- 1.12, N = 3)
OpenBenchmarking.org: Cryptsetup - AES-XTS 256b Decryption (MiB/s, More Is Better). 2: 1586.7 (SE +/- 15.21, N = 3); R1: 1577.4 (SE +/- 11.32, N = 3); 3: 1559.6 (SE +/- 11.68, N = 3)
OpenBenchmarking.org: Cryptsetup - AES-XTS 256b Encryption (MiB/s, More Is Better). 3: 1590.4 (SE +/- 1.02, N = 3); 2: 1581.4 (SE +/- 15.55, N = 3); R1: 1577.5 (SE +/- 19.58, N = 3)
OpenBenchmarking.org: Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better). R1: 513672; 3: 513336; 2: 513001 (SE +/- 335.33, N = 3 reported for two of the three runs)
OpenBenchmarking.org: Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better). R1: 1263348 (SE +/- 1520.33, N = 3); 3: 1263345 (SE +/- 878.73, N = 3); 2: 1262843 (SE +/- 2024.67, N = 3)
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). 2: 31.70 (SE +/- 0.34, N = 3, MIN: 30.17); 3: 31.97 (SE +/- 0.27, N = 3, MIN: 30.65); R1: 32.11 (SE +/- 0.29, N = 3, MIN: 30.06). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). R1: 29.15 (SE +/- 0.32, N = 3, MIN: 27.38); 2: 29.40 (SE +/- 0.29, N = 3, MIN: 28); 3: 29.48 (SE +/- 0.24, N = 3, MIN: 27.97). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
VkResample VkResample is a Vulkan-based image upscaling library based on VkFFT. The test upscales a sample 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: VkResample 1.0 - Upscale: 2x - Precision: Double (ms, Fewer Is Better). R1: 152.68 (SE +/- 1.71, N = 3); 2: 155.24 (SE +/- 1.45, N = 3); 3: 156.01 (SE +/- 0.61, N = 3). 1. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.org: VkResample 1.0 - Upscale: 2x - Precision: Single (ms, Fewer Is Better). R1: 152.24 (SE +/- 0.49, N = 3); 2: 152.38 (SE +/- 1.86, N = 3); 3: 153.25 (SE +/- 1.40, N = 3). 1. (CXX) g++ options: -O3 -pthread
Opus Codec Encoding Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
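The benchmark times the opusenc tool converting a WAV file; as a rough illustration of the underlying encode path, here is a hedged C++ sketch using the libopus encoder API on synthetic 48 kHz stereo PCM. The frame count, frame size, and buffer sizes are illustrative choices, not what opusenc itself uses.

// Hedged sketch of the libopus encode path (the test itself runs opusenc on a WAV file).
// Build (assumption): g++ -O2 opus_sketch.cpp -lopus
#include <cstdio>
#include <vector>
#include <opus/opus.h>

int main() {
    int err = 0;
    OpusEncoder* enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
    if (err != OPUS_OK || !enc) return 1;

    const int frame_size = 960;                       // 20 ms at 48 kHz
    std::vector<opus_int16> pcm(frame_size * 2, 0);   // one silent stereo frame
    unsigned char packet[4000];                       // generous packet buffer
    long total_bytes = 0;

    for (int i = 0; i < 500; ++i) {                   // roughly 10 s of audio
        opus_int32 n = opus_encode(enc, pcm.data(), frame_size, packet, sizeof(packet));
        if (n < 0) { opus_encoder_destroy(enc); return 1; }
        total_bytes += n;
    }
    opus_encoder_destroy(enc);
    std::printf("encoded 500 frames, %ld bytes total\n", total_bytes);
    return 0;
}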
OpenBenchmarking.org: Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better). R1: 10.89 (SE +/- 0.03, N = 5); 2: 10.90 (SE +/- 0.02, N = 5); 3: 10.92 (SE +/- 0.04, N = 5). 1. (CXX) g++ options: -fvisibility=hidden -logg -lm
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org: oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). R1: 21.00 (SE +/- 0.31, N = 3, MIN: 19.61); 3: 21.08 (SE +/- 0.23, N = 3, MIN: 19.85); 2: 21.09 (SE +/- 0.27, N = 3, MIN: 20.08). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). R1: 13.29 (SE +/- 0.08, N = 3, MIN: 12.99); 2: 13.33 (SE +/- 0.08, N = 3, MIN: 13.01); 3: 13.46 (SE +/- 0.08, N = 3, MIN: 13.01). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). 2: 9.12287 (SE +/- 0.02320, N = 3, MIN: 8.62); 3: 10.07928 (SE +/- 0.13687, N = 4, MIN: 9.14); R1: 10.10940 (SE +/- 0.04347, N = 3, MIN: 9.65). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). R1: 15.45 (SE +/- 0.04, N = 3, MIN: 14.57); 2: 15.54 (SE +/- 0.06, N = 3, MIN: 14.82); 3: 15.56 (SE +/- 0.01, N = 3, MIN: 14.82). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). 2: 20.52 (SE +/- 0.13, N = 3, MIN: 20.16); 3: 20.63 (SE +/- 0.05, N = 3, MIN: 20.29); R1: 20.91 (SE +/- 0.18, N = 3, MIN: 20.39). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). 3: 33.47 (SE +/- 0.04, N = 3, MIN: 33.25); 2: 33.50 (SE +/- 0.06, N = 3, MIN: 33.31); R1: 33.88 (SE +/- 0.13, N = 3, MIN: 33.42). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). 3: 27.30 (SE +/- 0.02, N = 3, MIN: 26.98); 2: 27.75 (SE +/- 0.09, N = 3, MIN: 27.23); R1: 27.80 (SE +/- 0.13, N = 3, MIN: 27.49). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). R1: 26.20 (SE +/- 0.03, N = 3, MIN: 26.09); 2: 26.22 (SE +/- 0.01, N = 3, MIN: 26.12); 3: 26.25 (SE +/- 0.02, N = 3, MIN: 26.08). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.org: oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). R1: 35.35 (SE +/- 0.04, N = 3, MIN: 33.41); 2: 35.64 (SE +/- 0.04, N = 3, MIN: 33.73); 3: 35.80 (SE +/- 0.03, N = 3, MIN: 33.94). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
R1 Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0x2f - Thermald 2.3
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 1 January 2021 13:28 by user phoronix.
2 Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0x2f - Thermald 2.3
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 1 January 2021 20:55 by user phoronix.
3 Processor: Intel Core i7-5600U @ 3.20GHz (2 Cores / 4 Threads), Motherboard: LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS), Chipset: Intel Broadwell-U-OPI, Memory: 8GB, Disk: 128GB SAMSUNG MZNTE128, Graphics: Intel HD 5500 3GB (950MHz), Audio: Intel Broadwell-U Audio, Network: Intel I218-LM + Intel 7265
OS: Ubuntu 20.10, Kernel: 5.9.1-050901-generic (x86_64), Desktop: GNOME Shell 3.38.1, Display Server: X Server 1.20.9, Display Driver: modesetting 1.20.9, OpenGL: 4.6 Mesa 21.0.0-devel (git-bd69765 2021-01-01 groovy-oibaf-ppa), OpenCL: OpenCL 3.0, Vulkan: 1.2.145, Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0x2f - Thermald 2.3
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Testing initiated at 2 January 2021 05:53 by user phoronix.