AMD Ryzen 9 7950X3D 16-Core testing with an ASRockRack B650D4U-2L2T/BCM (2.09 BIOS) and ASPEED 512MB graphics on Ubuntu 22.04 via the Phoronix Test Suite.
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp performance (EPP: performance); CPU Microcode: 0xa601203
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads)
Motherboard: ASRockRack B650D4U-2L2T/BCM (2.09 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 32 GB DDR5-4800MT/s MTC20C2085S1EC48BA1
Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3
Graphics: ASPEED 512MB
Audio: AMD Device 1640
Monitor: VA2431
Network: 2 x Intel I210 + 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA
OS: Ubuntu 22.04, Kernel: 6.6.0-rc4-phx-amd-pref-core (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1200
QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
QuantLib 1.32, Configuration: Multi-Threaded (MFLOPS, more is better)
  a: 82133.5, b: 82020.0, c: 81958.0, d: 82159.4 (SE +/- 83.63, N = 3)
QuantLib 1.32, Configuration: Single-Threaded (MFLOPS, more is better)
  a: 3980.3, b: 4042.6, c: 4025.4, d: 4033.0 (SE +/- 9.84, N = 3)
Compiler options (both QuantLib runs): (CXX) g++ -O3 -march=native -fPIE -pie
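Each result above reports a standard error (SE) over N repeated runs. As a minimal sketch of how such a figure is derived, assuming SE here means the standard error of the mean (sample standard deviation divided by sqrt(N)); the sample readings below are hypothetical, not the actual per-run measurements:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Three hypothetical MFLOPS readings from repeated runs of one configuration.
runs = [82050.0, 82190.0, 82160.0]
print(round(standard_error(runs), 2))
```

With only N = 3 runs per configuration, the SE is a rough noise estimate rather than a tight confidence bound.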
CloverLeaf

CloverLeaf 1.3, Input: clover_bm16 (Seconds, fewer is better)
  a: 1496.78, b: 1495.65, c: 1494.53, d: 1496.30 (SE +/- 0.59, N = 3)
CloverLeaf 1.3, Input: clover_bm64_short (Seconds, fewer is better)
  a: 175.91, b: 175.92, c: 175.92, d: 175.86 (SE +/- 0.02, N = 3)
Compiler options (both CloverLeaf runs): (F9X) gfortran -O3 -march=native -funroll-loops -fopenmp
QMCPACK

QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo (QMC) code for computing the electronic structure of atoms, molecules, and solids. This benchmark makes use of MPI and runs bundled example inputs, including the simple H2O case. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
QMCPACK 3.17.1, Input: H4_ae (Total Execution Time in Seconds, fewer is better)
  a: 12.16, b: 12.51, c: 12.63, d: 12.62 (SE +/- 0.09, N = 15)
QMCPACK 3.17.1, Input: Li2_STO_ae (Total Execution Time in Seconds, fewer is better)
  a: 135.60, b: 135.95, c: 135.60, d: 135.02 (SE +/- 1.12, N = 3)
QMCPACK 3.17.1, Input: LiH_ae_MSD (Total Execution Time in Seconds, fewer is better)
  a: 73.47, b: 74.10, c: 73.60, d: 73.49 (SE +/- 0.40, N = 3)
QMCPACK 3.17.1, Input: simple-H2O (Total Execution Time in Seconds, fewer is better)
  a: 18.24, b: 18.43, c: 18.39, d: 18.30 (SE +/- 0.03, N = 3)
QMCPACK 3.17.1, Input: O_ae_pyscf_UHF (Total Execution Time in Seconds, fewer is better)
  a: 132.77, b: 131.36, c: 129.98, d: 129.65 (SE +/- 1.01, N = 3)
QMCPACK 3.17.1, Input: FeCO6_b3lyp_gms (Total Execution Time in Seconds, fewer is better)
  a: 125.46, b: 125.49, c: 124.98, d: 125.13 (SE +/- 0.20, N = 3)
Compiler options (all QMCPACK runs): (CXX) g++ -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
FFmpeg

This is a benchmark of the FFmpeg multimedia framework. The test profile uses a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. It offers a range of vbench scenarios based on freely distributable video content and can use either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.
FFmpeg 6.1, Encoder: libx264, Scenario: Live (FPS, more is better)
  a: 282.96, b: 282.92, c: 284.24, d: 276.55 (SE +/- 1.97, N = 3)
FFmpeg 6.1, Encoder: libx265, Scenario: Live (FPS, more is better)
  a: 179.55, b: 180.06, c: 180.02, d: 179.29 (SE +/- 0.76, N = 3)
FFmpeg 6.1, Encoder: libx264, Scenario: Upload (FPS, more is better)
  a: 16.90, b: 16.67, c: 16.81, d: 16.64 (SE +/- 0.14, N = 3)
FFmpeg 6.1, Encoder: libx265, Scenario: Upload (FPS, more is better)
  a: 33.54, b: 33.53, c: 33.46, d: 33.57 (SE +/- 0.08, N = 3)
FFmpeg 6.1, Encoder: libx264, Scenario: Platform (FPS, more is better)
  a: 63.98, b: 63.85, c: 63.69, d: 63.75 (SE +/- 0.08, N = 3)
FFmpeg 6.1, Encoder: libx265, Scenario: Platform (FPS, more is better)
  a: 68.39, b: 68.36, c: 68.29, d: 68.63 (SE +/- 0.09, N = 3)
FFmpeg 6.1, Encoder: libx264, Scenario: Video On Demand (FPS, more is better)
  a: 64.16, b: 63.78, c: 64.14, d: 64.26 (SE +/- 0.11, N = 3)
FFmpeg 6.1, Encoder: libx265, Scenario: Video On Demand (FPS, more is better)
  a: 68.48, b: 68.57, c: 68.14, d: 68.71 (SE +/- 0.07, N = 3)
Compiler options (all FFmpeg runs): (CXX) g++ -O3 -rdynamic -lpthread -lrt -ldl -lnuma
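With four runs (a, b, c, d) per test, a quick way to judge whether differences between runs are meaningful is the relative spread across the runs, (max - min) / mean. A minimal sketch, using the libx264 Live figures from the results above:

```python
import statistics

def relative_spread(values):
    """(max - min) / mean, expressed as a fraction of the mean."""
    return (max(values) - min(values)) / statistics.mean(values)

# FPS for runs a, b, c, d of FFmpeg 6.1 libx264 Live (from the results above).
live_x264 = [282.96, 282.92, 284.24, 276.55]
print(f"{relative_spread(live_x264):.1%}")
```

A spread of a few percent, as here, is close to typical run-to-run noise for this kind of workload, so small ranking changes between a, b, c, and d should not be over-interpreted.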
WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220823, Encode Settings: Default (MP/s, more is better)
  a: 13.51, b: 13.05, c: 13.38, d: 13.71 (SE +/- 0.18, N = 3)
Compiler options: (CXX) g++ -msse4.2 -fno-rtti -O3 -ldl
easyWave

The easyWave software simulates tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The software is run with one of the example/reference input files while measuring CPU execution time. Learn more via the OpenBenchmarking.org test page.
easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source, Time: 240 (Seconds, fewer is better)
  a: 2.148, b: 2.089, c: 2.101, d: 2.090 (SE +/- 0.004, N = 3)
easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source, Time: 1200 (Seconds, fewer is better)
  a: 81.63, b: 81.03, c: 80.87, d: 80.10 (SE +/- 0.33, N = 3)
Compiler options (both easyWave runs): (CXX) g++ -O3 -fopenmp
Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
Embree 4.3, Binary: Pathtracer, Model: Crown (Frames Per Second, more is better)
  a: 30.50 (min 30.29 / max 31), b: 30.45 (min 30.1 / max 31.05), c: 30.39 (min 30.17 / max 30.88), d: 30.51 (min 30.28 / max 31.02); SE +/- 0.06, N = 3
Embree 4.3, Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, more is better)
  a: 31.92 (min 31.66 / max 32.67), b: 31.97 (min 31.62 / max 32.67), c: 31.70 (min 31.4 / max 32.36), d: 31.85 (min 31.57 / max 32.5); SE +/- 0.04, N = 3
Embree 4.3, Binary: Pathtracer, Model: Asian Dragon (Frames Per Second, more is better)
  a: 30.93 (min 30.8 / max 31.38), b: 30.89 (min 30.64 / max 31.42), c: 30.83 (min 30.69 / max 31.19), d: 30.83 (min 30.69 / max 31.27); SE +/- 0.06, N = 3
Embree 4.3, Binary: Pathtracer, Model: Asian Dragon Obj (Frames Per Second, more is better)
  a: 27.58 (min 27.4 / max 28.05), b: 27.52 (min 27.3 / max 28.13), c: 27.54 (min 27.35 / max 27.99), d: 27.53 (min 27.38 / max 27.96); SE +/- 0.04, N = 3
Embree 4.3, Binary: Pathtracer ISPC, Model: Asian Dragon (Frames Per Second, more is better)
  a: 33.78 (min 33.54 / max 34.43), b: 33.73 (min 33.38 / max 34.7), c: 33.65 (min 33.44 / max 34.17), d: 33.72 (min 33.46 / max 34.55); SE +/- 0.07, N = 3
Embree 4.3, Binary: Pathtracer ISPC, Model: Asian Dragon Obj (Frames Per Second, more is better)
  a: 28.61 (min 28.41 / max 29.24), b: 28.71 (min 28.44 / max 29.67), c: 28.64 (min 28.43 / max 29.3), d: 28.70 (min 28.49 / max 29.4); SE +/- 0.05, N = 3
OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU ISPC (Items / Sec, more is better)
  a: 603 (min 46 / max 8290), b: 603 (min 46 / max 8279), c: 604 (min 46 / max 8291), d: 603 (min 46 / max 8277); SE +/- 0.00, N = 3
OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU Scalar (Items / Sec, more is better)
  a: 242 (min 17 / max 4418), b: 242 (min 16 / max 4422), c: 242 (min 17 / max 4423), d: 241 (min 16 / max 4419); SE +/- 0.67, N = 3
oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total "perf" time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
oneDNN 3.3, Harness: IP Shapes 1D, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 1.89379 (min 1.7), b: 1.94087 (min 1.73), c: 1.92566 (min 1.71), d: 1.93518 (min 1.72); SE +/- 0.02190, N = 3
oneDNN 3.3, Harness: IP Shapes 3D, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 2.80183 (min 2.76), b: 2.81737 (min 2.75), c: 2.80377 (min 2.76), d: 2.81873 (min 2.78); SE +/- 0.01747, N = 3
oneDNN 3.3, Harness: IP Shapes 1D, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 0.523659 (min 0.43), b: 0.498741 (min 0.39), c: 0.534576 (min 0.43), d: 0.520575 (min 0.43); SE +/- 0.010372, N = 15
oneDNN 3.3, Harness: IP Shapes 3D, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 0.262956 (min 0.25), b: 0.279128 (min 0.24), c: 0.248788 (min 0.24), d: 0.256962 (min 0.24); SE +/- 0.003205, N = 13
oneDNN 3.3, Harness: IP Shapes 1D, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 0.694296 (min 0.65), b: 0.693123 (min 0.64), c: 0.695339 (min 0.65), d: 0.695171 (min 0.65); SE +/- 0.002235, N = 3
oneDNN 3.3, Harness: IP Shapes 3D, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 1.23712 (min 1.17), b: 1.18558 (min 1.08), c: 1.18372 (min 1.11), d: 1.17668 (min 1.1); SE +/- 0.01251, N = 5
oneDNN 3.3, Harness: Convolution Batch Shapes Auto, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 4.01781 (min 3.96), b: 4.03418 (min 3.94), c: 4.01114 (min 3.96), d: 4.04319 (min 3.98); SE +/- 0.00942, N = 3
oneDNN 3.3, Harness: Deconvolution Batch shapes_1d, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 2.98536 (min 2.51), b: 3.01099 (min 2.51), c: 3.00906 (min 2.51), d: 2.98785 (min 2.51); SE +/- 0.00759, N = 3
oneDNN 3.3, Harness: Deconvolution Batch shapes_3d, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 2.59896 (min 2.59), b: 2.59838 (min 2.59), c: 2.59835 (min 2.59), d: 2.59886 (min 2.59); SE +/- 0.00014, N = 3
oneDNN 3.3, Harness: Convolution Batch Shapes Auto, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 3.76779 (min 3.68), b: 3.71650 (min 3.63), c: 3.72781 (min 3.66), d: 3.76192 (min 3.67); SE +/- 0.01733, N = 3
oneDNN 3.3, Harness: Deconvolution Batch shapes_1d, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 0.455531 (min 0.44), b: 0.456347 (min 0.44), c: 0.456250 (min 0.44), d: 0.455553 (min 0.44); SE +/- 0.000061, N = 3
oneDNN 3.3, Harness: Deconvolution Batch shapes_3d, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 0.647550 (min 0.64), b: 0.646866 (min 0.64), c: 0.646223 (min 0.64), d: 0.645231 (min 0.64); SE +/- 0.000222, N = 3
oneDNN 3.3, Harness: Recurrent Neural Network Training, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 1227.58 (min 1224.62), b: 1229.80 (min 1224.55), c: 1234.14 (min 1229.87), d: 1224.00 (min 1220.91); SE +/- 0.51, N = 3
oneDNN 3.3, Harness: Recurrent Neural Network Inference, Data Type: f32, Engine: CPU (ms, fewer is better)
  a: 620.71 (min 617.84), b: 636.50 (min 632.82), c: 632.39 (min 629.36), d: 625.50 (min 622.67); SE +/- 0.32, N = 3
oneDNN 3.3, Harness: Recurrent Neural Network Training, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 1231.93 (min 1227.81), b: 1236.68 (min 1230.73), c: 1236.28 (min 1233.21), d: 1224.11 (min 1220.23); SE +/- 0.95, N = 3
oneDNN 3.3, Harness: Convolution Batch Shapes Auto, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 1.09425 (min 1.07), b: 1.09760 (min 1.07), c: 1.09942 (min 1.07), d: 1.09782 (min 1.07); SE +/- 0.00130, N = 3
oneDNN 3.3, Harness: Deconvolution Batch shapes_1d, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 2.34329 (min 2.31), b: 2.34125 (min 2.31), c: 2.33922 (min 2.31), d: 2.34110 (min 2.3); SE +/- 0.00064, N = 3
oneDNN 3.3, Harness: Deconvolution Batch shapes_3d, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 1.48385 (min 1.47), b: 1.48496 (min 1.47), c: 1.48752 (min 1.48), d: 1.48153 (min 1.47); SE +/- 0.00176, N = 3
oneDNN 3.3, Harness: Recurrent Neural Network Inference, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  a: 633.91 (min 629.4), b: 635.30 (min 629.42), c: 624.14 (min 621.15), d: 634.66 (min 631.75); SE +/- 1.22, N = 3
oneDNN 3.3, Harness: Recurrent Neural Network Training, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 1236.29 (min 1232.2), b: 1231.68 (min 1222.84), c: 1228.25 (min 1224.56), d: 1234.57 (min 1231); SE +/- 2.80, N = 3
oneDNN 3.3, Harness: Recurrent Neural Network Inference, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better)
  a: 632.07 (min 628.15), b: 633.61 (min 626.34), c: 628.31 (min 625.84), d: 635.31 (min 632.15); SE +/- 2.14, N = 3
Compiler options (all oneDNN runs): (CXX) g++ -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
All OSPRay Studio results below use Renderer: Path Tracer with Acceleration: CPU (ms, fewer is better).

OSPRay Studio 0.13, Camera: 1, Resolution: 4K, Samples Per Pixel: 1
  a: 4251, b: 4267, c: 4239, d: 4251 (SE +/- 3.51, N = 3)
OSPRay Studio 0.13, Camera: 2, Resolution: 4K, Samples Per Pixel: 1
  a: 4303, b: 4323, c: 4316, d: 4297 (SE +/- 4.41, N = 3)
OSPRay Studio 0.13, Camera: 3, Resolution: 4K, Samples Per Pixel: 1
  a: 5025, b: 5027, c: 5035, d: 5042 (SE +/- 4.26, N = 3)
OSPRay Studio 0.13, Camera: 1, Resolution: 4K, Samples Per Pixel: 16
  a: 71605, b: 72346, c: 72590, d: 71802 (SE +/- 88.19, N = 3)
OSPRay Studio 0.13, Camera: 1, Resolution: 4K, Samples Per Pixel: 32
  a: 139956, b: 140342, c: 140119, d: 139631 (SE +/- 200.00, N = 3)
OSPRay Studio 0.13, Camera: 2, Resolution: 4K, Samples Per Pixel: 16
  a: 73112, b: 72815, c: 72888, d: 72762 (SE +/- 124.54, N = 3)
OSPRay Studio 0.13, Camera: 2, Resolution: 4K, Samples Per Pixel: 32
  a: 141550, b: 141773, c: 141698, d: 141360 (SE +/- 201.33, N = 3)
OSPRay Studio 0.13, Camera: 3, Resolution: 4K, Samples Per Pixel: 16
  a: 84397, b: 84660, c: 84786, d: 84573 (SE +/- 192.43, N = 3)
OSPRay Studio 0.13, Camera: 3, Resolution: 4K, Samples Per Pixel: 32
  a: 165253, b: 165034, c: 164625, d: 164645 (SE +/- 70.29, N = 3)
OSPRay Studio 0.13, Camera: 1, Resolution: 1080p, Samples Per Pixel: 1
  a: 1069, b: 1069, c: 1069, d: 1068 (SE +/- 0.58, N = 3)
OSPRay Studio 0.13, Camera: 2, Resolution: 1080p, Samples Per Pixel: 1
  a: 1083, b: 1083, c: 1083, d: 1085 (SE +/- 0.58, N = 3)
OSPRay Studio 0.13, Camera: 3, Resolution: 1080p, Samples Per Pixel: 1
  a: 1267, b: 1266, c: 1265, d: 1264 (SE +/- 2.65, N = 3)
OSPRay Studio 0.13, Camera: 1, Resolution: 1080p, Samples Per Pixel: 16
  a: 17116, b: 17116, c: 17097, d: 17075 (SE +/- 12.41, N = 3)
OSPRay Studio 0.13, Camera: 1, Resolution: 1080p, Samples Per Pixel: 32
  a: 38499, b: 38279, c: 38448, d: 38512 (SE +/- 262.57, N = 3)
OSPRay Studio 0.13, Camera: 2, Resolution: 1080p, Samples Per Pixel: 16
  a: 17266, b: 17316, c: 17235, d: 17289 (SE +/- 12.45, N = 3)
OSPRay Studio 0.13, Camera: 2, Resolution: 1080p, Samples Per Pixel: 32
  a: 38852, b: 38740, c: 38798, d: 39062 (SE +/- 34.12, N = 3)
OSPRay Studio 0.13, Camera: 3, Resolution: 1080p, Samples Per Pixel: 16
  a: 20148, b: 20226, c: 20186, d: 20139 (SE +/- 20.65, N = 3)
OSPRay Studio 0.13, Camera: 3, Resolution: 1080p, Samples Per Pixel: 32
  a: 44670, b: 44437, c: 44682, d: 44177 (SE +/- 116.17, N = 3)
Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
Cpuminer-Opt 23.5, Algorithm: Magi (kH/s, more is better)
  a: 640.30, b: 635.66, c: 636.52, d: 635.79 (SE +/- 0.61, N = 3)
Cpuminer-Opt 23.5, Algorithm: scrypt (kH/s, more is better)
  a: 304.46, b: 304.62, c: 305.22, d: 305.36 (SE +/- 0.25, N = 3)
Cpuminer-Opt 23.5, Algorithm: Deepcoin (kH/s, more is better)
  a: 7978.24, b: 7965.21, c: 7982.94, d: 7973.87 (SE +/- 21.85, N = 3)
Cpuminer-Opt 23.5, Algorithm: Ringcoin (kH/s, more is better)
  a: 3455.06, b: 3355.73, c: 3367.45, d: 3350.86 (SE +/- 2.45, N = 3)
Cpuminer-Opt 23.5, Algorithm: Blake-2 S (kH/s, more is better)
  a: 134660, b: 134177, c: 134800, d: 135200 (SE +/- 3.33, N = 3)
Cpuminer-Opt 23.5, Algorithm: Garlicoin (kH/s, more is better)
  a: 1796.71, b: 1783.76, c: 1783.94, d: 1844.59 (SE +/- 2.78, N = 3)
Cpuminer-Opt 23.5, Algorithm: Skeincoin (kH/s, more is better)
  a: 34440, b: 34440, c: 34430, d: 34470 (SE +/- 5.77, N = 3)
Cpuminer-Opt 23.5, Algorithm: Myriad-Groestl (kH/s, more is better)
  a: 11440, b: 11490, c: 11400, d: 11390 (SE +/- 50.00, N = 3)
Cpuminer-Opt 23.5, Algorithm: LBC, LBRY Credits (kH/s, more is better)
  a: 15850, b: 15783, c: 15790, d: 15780 (SE +/- 3.33, N = 3)
Cpuminer-Opt 23.5, Algorithm: Quad SHA-256, Pyrite (kH/s, more is better)
  a: 62080, b: 62127, c: 62080, d: 62050 (SE +/- 16.67, N = 3)
Cpuminer-Opt 23.5, Algorithm: Triple SHA-256, Onecoin (kH/s, more is better)
  a: 105880, b: 105897, c: 105890, d: 105860 (SE +/- 3.33, N = 3)
Compiler options (all Cpuminer-Opt runs): (CXX) g++ -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenSSL

OpenSSL is an open-source toolkit that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile benchmarks the system/OS-supplied openssl binary, rather than the pts/openssl test profile's locally built OpenSSL. Learn more via the OpenBenchmarking.org test page.
OpenSSL, Algorithm: SHA256 (byte/s, more is better)
  a: 32631708710, b: 32907394910, c: 32833153190, d: 32930127410
OpenSSL, Algorithm: SHA512 (byte/s, more is better)
  a: 10638469960, b: 10635835530, c: 10649075500, d: 10639231370
OpenSSL, Algorithm: RSA4096 (sign/s, more is better)
  a: 5501.2, b: 5508.4, c: 5468.4, d: 5500.8
OpenSSL, Algorithm: RSA4096 (verify/s, more is better)
  a: 358677.5, b: 359044.2, c: 358701.3, d: 358447.1
OpenSSL, Algorithm: ChaCha20 (byte/s, more is better)
  a: 125488652010, b: 125127430960, c: 125338580040, d: 125277063850
OpenSSL, Algorithm: AES-128-GCM (byte/s, more is better)
  a: 98960081440, b: 98958335180, c: 98987719340, d: 99011030490
OpenSSL, Algorithm: AES-256-GCM (byte/s, more is better)
  a: 92480786980, b: 92512692630, c: 92479315690, d: 92479134650
OpenSSL, Algorithm: ChaCha20-Poly1305 (byte/s, more is better)
  a: 89461502230, b: 89582688390, c: 89478928250, d: 89488403930
All runs: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
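The OpenSSL figures above come from OpenSSL's own speed-testing machinery. As a rough illustration of what a bytes-per-second hashing figure means, here is a minimal sketch that measures SHA-256 throughput with Python's hashlib instead of OpenSSL itself; it will be far slower than the numbers above and is purely illustrative:

```python
import hashlib
import time

def sha256_throughput(total_mib=64, chunk_size=1 << 20):
    """Hash total_mib MiB of zero bytes through SHA-256; return bytes/second."""
    chunk = b"\x00" * chunk_size
    h = hashlib.sha256()
    start = time.perf_counter()
    for _ in range(total_mib):
        h.update(chunk)
    elapsed = time.perf_counter() - start
    return (total_mib * chunk_size) / elapsed

print(f"{sha256_throughput():.3e} bytes/sec")
```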
RabbitMQ

RabbitMQ is an open-source message broker. This test profile makes use of the RabbitMQ PerfTest tool, with the RabbitMQ server and the PerfTest client running on the same host, effectively making it a system/CPU performance benchmark. Learn more via the OpenBenchmarking.org test page.
Scenario: Simple 2 Publishers + 4 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 10 Queues, 100 Producers, 100 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 60 Queues, 100 Producers, 100 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 120 Queues, 400 Producers, 400 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 200 Queues, 400 Producers, 400 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
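Every scenario failed identically on all four runs, which points to the benchmark's message broker never having been reachable on its port, rather than a load-dependent failure (an inference from the error text, not something the logs state). A minimal, hypothetical Python sketch reproduces the same failure mode; the port number is an assumption (61616 is ActiveMQ's default OpenWire port, and the queue/producer/consumer scenarios resemble an ActiveMQ-style test):

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Try a TCP connection; report refusal the way the failed runs did."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        # Same underlying error (ECONNREFUSED) that surfaces in Java
        # as java.net.ConnectException: Connection refused.
        return "Connection refused"

# 61616 is assumed here; with no broker listening, the connect is refused.
print(probe("127.0.0.1", 61616))
```

Checking that the broker process is actually up and listening before launching the producers/consumers would distinguish this from a genuine benchmark regression.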
PyTorch
OpenBenchmarking.org - PyTorch 2.1 - Device: CPU - batches/sec, More Is Better. Results for runs a / b / c / d, with each run's MIN-MAX in parentheses:
ResNet-50, Batch Size 1: a 69.41 (64.47-71.04), b 70.39 (65.98-71.59), c 69.05 (64.46-70.37), d 68.99 (65.18-70.36)
ResNet-152, Batch Size 1: a 26.99 (25.92-27.21), b 26.73 (25.73-27.4), c 27.02 (26.41-27.29), d 27.63 (26.37-27.91)
ResNet-50, Batch Size 16: a 45.70 (43.16-46.31), b 46.73 (43.51-47.16), c 45.94 (36.22-46.45), d 45.16 (42.72-46.38)
ResNet-50, Batch Size 32: a 44.97 (42.18-46.53), b 45.98 (43.4-46.68), c 45.91 (41.91-46.67), d 47.70 (44.77-48.1)
ResNet-50, Batch Size 64: a 45.45 (34.94-45.89), b 47.01 (42.75-47.57), c 46.10 (35.44-46.83), d 45.43 (43.43-46.04)
ResNet-152, Batch Size 16: a 18.13 (17.59-18.32), b 18.02 (17.53-18.23), c 18.03 (17.54-18.23), d 18.02 (17.66-18.18)
ResNet-50, Batch Size 256: a 45.09 (43.25-45.61), b 46.29 (42.39-46.81), c 46.16 (42.11-46.89), d 47.06 (42.87-47.87)
ResNet-152, Batch Size 32: a 18.04 (17.52-18.19), b 18.06 (17.57-18.26), c 18.05 (17.55-18.12), d 18.14 (17.74-18.22)
ResNet-50, Batch Size 512: a 46.31 (43.53-47.23), b 46.36 (42.96-47.26), c 46.41 (43.7-46.95), d 46.93 (44.02-47.5)
ResNet-152, Batch Size 64: a 17.80 (17.33-18.03), b 18.07 (16.99-18.18), c 17.75 (17.45-17.89), d 18.19 (17.82-18.26)
ResNet-152, Batch Size 256: a 17.96 (17.36-18.16), b 18.11 (17.67-18.23), c 18.11 (17.68-18.22), d 18.19 (14.43-18.68)
ResNet-152, Batch Size 512: a 18.30 (17.67-18.39), b 17.81 (17.54-18.01), c 18.16 (17.66-18.27), d 17.95 (17.53-18.17)
Efficientnet_v2_l, Batch Size 1: a 14.20 (14.07-14.34), b 14.12 (13.18-14.23), c 14.01 (13.86-14.16), d 14.25 (14.08-14.36)
Efficientnet_v2_l, Batch Size 16: a 10.62 (9.39-10.82), b 10.86 (9.5-11.07), c 10.72 (9.34-10.88), d 10.97 (9.39-11.28)
Efficientnet_v2_l, Batch Size 32: a 10.81 (9.24-10.96), b 10.89 (9.54-11.07), c 10.87 (9.55-11.02), d 10.92 (9.26-11.09)
Efficientnet_v2_l, Batch Size 64: a 10.86 (9.49-11), b 10.93 (8.84-11.11), c 10.82 (9.26-10.96), d 10.90 (9.57-11.05)
Efficientnet_v2_l, Batch Size 256: a 10.84 (9.5-10.98), b 10.93 (9.59-11.11), c 10.88 (9.61-11.08), d 10.79 (9.49-10.96)
Efficientnet_v2_l, Batch Size 512: a 10.93 (9.18-11.07), b 11.04 (9.87-11.21), c 10.85 (9.35-11.06), d 10.94 (9.62-11.08)
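With four runs per configuration, the interesting question is whether the differences between a, b, c, and d exceed normal run-to-run noise. A small sketch of that check, using the ResNet-50 batch-size-1 numbers transcribed from the results above:

```python
from statistics import mean

# batches/sec for ResNet-50, batch size 1, runs a-d (from the results above)
resnet50_bs1 = {"a": 69.41, "b": 70.39, "c": 69.05, "d": 68.99}

vals = list(resnet50_bs1.values())
avg = mean(vals)                                # mean across the four runs
spread_pct = (max(vals) - min(vals)) / avg * 100  # best-to-worst spread, %

print(f"mean {avg:.2f} batches/sec, spread {spread_pct:.1f}%")
```

The spread comes out to roughly 2%, which is of the same order as the MIN-MAX variation reported within each individual run, so none of the four configurations is clearly ahead on this workload.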
OpenVINO
This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org - OpenVINO 2023.2.dev - Device: CPU. Throughput in FPS (More Is Better); latency in ms (Fewer Is Better), with each run's MIN-MAX in parentheses.
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl (all OpenVINO results below)
Face Detection FP16 - FPS: a 13.41, b 13.38, c 13.33, d 13.39
Face Detection FP16 - ms: a 595.55 (576.09-622.5), b 595.36 (575.04-623.57), c 597.20 (574.49-623.82), d 594.98 (577.06-624.47)
Person Detection FP16 - FPS: a 94.20, b 93.95, c 94.29, d 94.99
Person Detection FP16 - ms: a 84.83 (51.55-110.45), b 85.07 (55.85-113.02), c 84.74 (44.18-109.72), d 84.14 (49.25-110.52)
Person Detection FP32 - FPS: a 94.05, b 94.66, c 93.76, d 94.28
Person Detection FP32 - ms: a 84.99 (54.5-109.88), b 84.43 (38.96-118.46), c 85.27 (56.98-111.16), d 84.74 (43.16-116.66)
Vehicle Detection FP16 - FPS: a 1034.25, b 1032.48, c 1032.00, d 1033.33
Vehicle Detection FP16 - ms: a 7.71 (4.99-14.68), b 7.73 (4.78-16.94), c 7.73 (4.84-13.08), d 7.72 (4.51-13.75)
Face Detection FP16-INT8 - FPS: a 25.52, b 25.56, c 25.56, d 25.50
Face Detection FP16-INT8 - ms: a 313.05 (299.21-324.17), b 312.53 (300.51-321.6), c 312.48 (296.91-323.74), d 313.25 (299.83-323.76)
Face Detection Retail FP16 - FPS: a 3070.77, b 3063.55, c 3067.52, d 3069.35
Face Detection Retail FP16 - ms: a 2.49 (1.35-6.3), b 2.50 (1.34-9.48), c 2.50 (1.34-9.59), d 2.49 (1.35-9.24)
Road Segmentation ADAS FP16 - FPS: a 439.75, b 435.77, c 437.76, d 434.98
Road Segmentation ADAS FP16 - ms: a 18.16 (9.71-27.64), b 18.32 (12.74-30.01), c 18.24 (12.24-26.29), d 18.36 (9.65-27.17)
Vehicle Detection FP16-INT8 - FPS: a 1613.22, b 1617.79, c 1617.82, d 1619.16
Vehicle Detection FP16-INT8 - ms: a 4.92 (2.75-10.58), b 4.91 (2.77-14.1), c 4.90 (2.76-13.8), d 4.90 (2.75-9.12)
Weld Porosity Detection FP16 - FPS: a 1353.91, b 1352.89, c 1353.62, d 1351.17
Weld Porosity Detection FP16 - ms: a 11.79 (6.37-23.3), b 11.80 (7.57-15.99), c 11.79 (6.2-21.55), d 11.81 (6.78-18.07)
Face Detection Retail FP16-INT8 - FPS: a 4527.76, b 4537.36, c 4512.34, d 4544.57
Face Detection Retail FP16-INT8 - ms: a 3.44 (1.95-10.99), b 3.43 (1.96-8.28), c 3.44 (1.94-11.06), d 3.42 (1.94-6.89)
Road Segmentation ADAS FP16-INT8 - FPS: a 524.58, b 522.64, c 532.94, d 521.46
Road Segmentation ADAS FP16-INT8 - ms: a 15.23 (11.89-21.11), b 15.28 (9.17-19.71), c 14.99 (11.64-20.21), d 15.32 (12.62-21)
Machine Translation EN To DE FP16 - FPS: a 130.05, b 130.95, c 130.06, d 131.09
Machine Translation EN To DE FP16 - ms: a 61.38 (46.13-71.27), b 60.98 (27.92-70.86), c 61.38 (44.4-70.65), d 60.94 (46.99-72.58)
Weld Porosity Detection FP16-INT8 - FPS: a 2608.39, b 2604.12, c 2609.57, d 2603.35
Weld Porosity Detection FP16-INT8 - ms: a 6.10 (3.19-11.01), b 6.11 (3.19-13.98), c 6.10 (3.18-11.91), d 6.11 (3.18-11.79)
Person Vehicle Bike Detection FP16 - FPS: a 1572.38, b 1587.74, c 1591.84, d 1572.56
Person Vehicle Bike Detection FP16 - ms: a 5.06 (3.62-13.34), b 5.02 (3.6-10.68), c 5.00 (3.63-11.88), d 5.06 (3.24-9.55)
Handwritten English Recognition FP16 - FPS: a 739.56, b 731.42, c 733.81, d 733.36
Handwritten English Recognition FP16 - ms: a 21.61 (15.02-30.51), b 21.85 (14.62-38.4), c 21.77 (17.91-28.95), d 21.79 (14.67-31.51)
Age Gender Recognition Retail 0013 FP16 - FPS: a 33511.79, b 33483.53, c 33482.54, d 33491.83
Age Gender Recognition Retail 0013 FP16 - ms: a 0.43 (0.22-4.2), b 0.43 (0.22-4.19), c 0.43 (0.22-7.73), d 0.43 (0.22-5.02)
Handwritten English Recognition FP16-INT8 - FPS: a 584.10, b 585.55, c 587.47, d 579.56
Handwritten English Recognition FP16-INT8 - ms: a 27.36 (19.64-35.39), b 27.29 (21.85-35.83), c 27.21 (22.22-34.96), d 27.57 (20.3-33.35)
Age Gender Recognition Retail 0013 FP16-INT8 - FPS: a 47376.59, b 47316.17, c 47255.36, d 47453.43
Age Gender Recognition Retail 0013 FP16-INT8 - ms: a 0.29 (0.17-7.87), b 0.29 (0.17-7.59), c 0.30 (0.17-7.08), d 0.29 (0.17-7.66)
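The paired throughput and latency figures can be reconciled with Little's law (in-flight requests ≈ throughput x latency): for Face Detection FP16 on run a, 13.41 FPS at an average latency of 595.55 ms implies roughly eight concurrent inference requests, suggesting OpenVINO's CPU plugin was running multiple parallel inference streams. This is an inference from the numbers, not something the log reports:

```python
# Little's law sketch: concurrency = throughput * latency.
# Numbers are Face Detection FP16, run a, from the results above.
fps = 13.41           # throughput, frames per second
latency_ms = 595.55   # average latency per request, milliseconds

in_flight = fps * latency_ms / 1000  # estimated concurrent requests
print(round(in_flight))
```

This is why the per-request latency is far higher than 1000/FPS would suggest: requests are processed in parallel streams, trading individual latency for aggregate throughput.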
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601203
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 21 November 2023 16:01 by user root.
b
Kernel, compiler, processor, Java, Python, and security notes: identical to configuration a.
Testing initiated at 21 November 2023 20:09 by user root.
c
Kernel, compiler, processor, Java, Python, and security notes: identical to configuration a.
Testing initiated at 22 November 2023 05:51 by user root.
d Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads), Motherboard: ASRockRack B650D4U-2L2T/BCM (2.09 BIOS), Chipset: AMD Device 14d8, Memory: 2 x 32 GB DDR5-4800MT/s MTC20C2085S1EC48BA1, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3, Graphics: ASPEED 512MB, Audio: AMD Device 1640, Monitor: VA2431, Network: 2 x Intel I210 + 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA
OS: Ubuntu 22.04, Kernel: 6.6.0-rc4-phx-amd-pref-core (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1200
Kernel, compiler, processor, Java, Python, and security notes: identical to configuration a.
Testing initiated at 22 November 2023 09:53 by user root.