AMD Ryzen 7 5800X3D 8-Core testing with an ASUS ROG CROSSHAIR VIII HERO (4201 BIOS) and Intel DG2 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
A
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa20120a
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
B C D Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (4201 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: Intel DG2 8GB (2400MHz), Audio: Intel Device 4f90, Monitor: ASUS VP28U, Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04, Kernel: 5.15.47+prerelease3723 (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server 1.21.1.3 + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-44289c46d9), Vulkan: 1.3.219, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
5800x3d Smoke Okt - Benchmarks & System Logs (full hardware/software configuration and system notes as listed above).
Result Overview: runs A, B, C, and D performed within roughly 2% of one another (about 100% to 102% relative performance) across the oneDNN, PostgreSQL, OpenRadioss, spaCy, Y-Cruncher, SMHasher, QuadRay, and AOM AV1 test suites.
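For readers curious how a relative-performance overview like this can be derived, the Python sketch below is illustrative only (not necessarily the exact Phoronix Test Suite method): each result is normalized to run A, lower-is-better metrics are inverted, and a geometric mean is taken per run. The two sample data points are taken from the charts later in this report.

# Illustrative sketch of per-run relative performance via geometric mean.
# Sample values come from the AOM AV1 and oneDNN charts below; the method
# (normalize to run A, invert lower-is-better metrics, geometric mean) is an
# assumption for illustration, not a statement of how OpenBenchmarking.org
# computes its Result Overview.
from math import prod

# {test name: ({run: value}, lower_is_better)}
results = {
    "AOM AV1 Speed 9 Realtime 4K (FPS)": ({"A": 80.78, "B": 81.58}, False),
    "oneDNN IP Shapes 3D f32 (ms)":      ({"A": 6.73091, "B": 6.80659}, True),
}

def relative_performance(run):
    ratios = []
    for values, lower_is_better in results.values():
        if lower_is_better:
            ratios.append(values["A"] / values[run])   # faster (lower) is better
        else:
            ratios.append(values[run] / values["A"])   # higher is better
    return prod(ratios) ** (1 / len(ratios))           # geometric mean of ratios

for run in ("A", "B"):
    print(f"{run}: {relative_performance(run) * 100:.1f}%")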
Consolidated raw results for runs A-D across all tests in this comparison (SMHasher, OpenRadioss, TensorFlow, Y-Cruncher, Neural Magic DeepSparse, spaCy, oneDNN, AOM AV1, QuadRay, and PostgreSQL pgbench); per-test results are charted in the sections below.
OpenRadioss
OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis, based on Altair Radioss and open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
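As a rough illustration of how a "Seconds, Fewer Is Better" figure is produced for a solver run, the Python sketch below simply times an external process with wall-clock time. The binary path, input file name, and invocation are hypothetical placeholders, not the actual reference build or model files used by this test profile.

# Minimal sketch: time an external solver run and report elapsed seconds.
# SOLVER and INPUT_DECK are hypothetical placeholders for illustration only.
import subprocess
import time

SOLVER = "./openradioss_binary"      # hypothetical path to a solver binary
INPUT_DECK = "BumperBeam_model.rad"  # hypothetical input file for the model

start = time.time()
subprocess.run([SOLVER, INPUT_DECK], check=True)   # assumed invocation form
elapsed = time.time() - start

print(f"Bumper Beam: {elapsed:.2f} seconds")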
OpenRadioss 2022.10.13, Model: Bumper Beam (Seconds, Fewer Is Better): A: 144.69, B: 145.55, C: 149.29, D: 145.41
TensorFlow
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
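For context on the images/sec metric reported below, here is a minimal Python sketch of a CPU throughput measurement using forward passes of a VGG-16 model at batch size 16. It is illustrative only and is not the tf_cnn_benchmarks.py harness that this test profile actually runs.

# Minimal sketch: measure images/sec for VGG-16 forward passes on the CPU.
# Not the tf_cnn_benchmarks.py harness; batch size matches the charts below.
import time
import numpy as np
import tensorflow as tf

BATCH_SIZE = 16          # matches the "Batch Size: 16" configurations below
NUM_BATCHES = 10         # small number of timed steps for the sketch

model = tf.keras.applications.VGG16(weights=None)            # untrained VGG-16
images = np.random.rand(BATCH_SIZE, 224, 224, 3).astype("float32")

model.predict(images, verbose=0)                              # warm-up run

start = time.time()
for _ in range(NUM_BATCHES):
    model.predict(images, verbose=0)
elapsed = time.time() - start

print(f"{BATCH_SIZE * NUM_BATCHES / elapsed:.2f} images/sec")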
TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec, More Is Better): A: 5.55, B: 5.55, D: 5.55
Device: CPU - Batch Size: 512 - Model: VGG-16
A: The test quit with a non-zero exit status. E: Fatal Python error: Aborted
D: The test quit with a non-zero exit status. E: Fatal Python error: Aborted
TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, More Is Better): A: 109.77, D: 109.92
TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better): A: 14.55, D: 14.49
Device: CPU - Batch Size: 512 - Model: ResNet-50
A: The test quit with a non-zero exit status.
D: The test quit with a non-zero exit status.
spaCy
spaCy is an open-source Python library for advanced natural language processing (NLP) and a leading solution in the field. This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
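As a rough illustration of the tokens/sec metric, the following Python sketch times spaCy's en_core_web_lg pipeline over a small synthetic corpus. It assumes the model has been downloaded separately (python -m spacy download en_core_web_lg) and is not the exact timing harness used by the test profile.

# Minimal sketch: tokens-per-second throughput for a spaCy pipeline on the CPU.
import time
import spacy

nlp = spacy.load("en_core_web_lg")
texts = ["The quick brown fox jumps over the lazy dog."] * 1000  # sample corpus

start = time.time()
token_count = sum(len(doc) for doc in nlp.pipe(texts, batch_size=64))
elapsed = time.time() - start

print(f"{token_count / elapsed:.0f} tokens/sec")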
spaCy 3.4.1, Model: en_core_web_lg (tokens/sec, More Is Better): A: 15416, B: 15357, C: 15401, D: 15424
oneDNN
oneDNN 2.7, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 2.93636 (MIN: 2.88), B: 2.91481 (MIN: 2.87), C: 2.91791 (MIN: 2.86), D: 2.93008 (MIN: 2.87)
oneDNN 2.7, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 6.73091 (MIN: 6.51), B: 6.80659 (MIN: 6.65), C: 7.43205 (MIN: 7.34), D: 7.50774 (MIN: 7.38)
oneDNN 2.7, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 1.24936 (MIN: 1.23), B: 1.24604 (MIN: 1.23), C: 1.24858 (MIN: 1.23), D: 1.24944 (MIN: 1.23)
oneDNN 2.7, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 0.608646 (MIN: 0.59), B: 0.610055 (MIN: 0.59), C: 0.610944 (MIN: 0.6), D: 0.613447 (MIN: 0.6)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
D: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
D: The test run did not produce a result.
oneDNN 2.7, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 12.79 (MIN: 12.55), B: 12.87 (MIN: 12.51), C: 13.24 (MIN: 13.1), D: 13.35 (MIN: 13.18)
oneDNN 2.7, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 7.23318 (MIN: 5.1), B: 6.83968 (MIN: 5.09), C: 7.37916 (MIN: 5.1), D: 7.10311 (MIN: 5.11)
oneDNN 2.7, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 5.61357 (MIN: 5.45), B: 5.59992 (MIN: 5.5), C: 5.61586 (MIN: 5.46), D: 5.61171 (MIN: 5.46)
oneDNN 2.7, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 11.89 (MIN: 10.6), B: 10.67 (MIN: 10.47), C: 11.18 (MIN: 10.98), D: 11.37 (MIN: 11.19)
oneDNN 2.7, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 1.79935 (MIN: 1.77), B: 1.79390 (MIN: 1.77), C: 1.79785 (MIN: 1.77), D: 1.79717 (MIN: 1.77)
oneDNN 2.7, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 2.52002 (MIN: 2.46), B: 2.48677 (MIN: 2.44), C: 2.54366 (MIN: 2.49), D: 2.53624 (MIN: 2.48)
oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 2717.67 (MIN: 2708.62), B: 2701.46 (MIN: 2690.63), C: 2711.47 (MIN: 2702.18), D: 2726.78 (MIN: 2706.89)
oneDNN 2.7, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 1395.86 (MIN: 1389.6), B: 1389.29 (MIN: 1380.09), C: 1391.77 (MIN: 1385.84), D: 1393.37 (MIN: 1387.32)
oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 2724.50 (MIN: 2717.11), B: 2715.72 (MIN: 2708.81), C: 2719.71 (MIN: 2712.91), D: 2722.90 (MIN: 2715.56)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
D: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
D: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
D: The test run did not produce a result.
oneDNN 2.7, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 1392.31 (MIN: 1386.61), B: 1385.11 (MIN: 1379.59), C: 1389.43 (MIN: 1383.24), D: 1390.76 (MIN: 1384.5)
oneDNN 2.7, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): A: 1.08635 (MIN: 1.04), B: 1.07920 (MIN: 1.04), C: 1.08253 (MIN: 1.05), D: 1.08663 (MIN: 1.05)
oneDNN 2.7, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): A: 2724.19 (MIN: 2717.07), B: 2707.34 (MIN: 2699.45), C: 2717.02 (MIN: 2710.9), D: 2721.31 (MIN: 2713.72)
oneDNN 2.7, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): A: 1400.14 (MIN: 1392.75), B: 1388.94 (MIN: 1383.27), C: 1393.85 (MIN: 1387.96), D: 1393.64 (MIN: 1387.06)
oneDNN 2.7, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): A: 0.847434 (MIN: 0.81), B: 0.841664 (MIN: 0.81), C: 0.840243 (MIN: 0.8), D: 0.844385 (MIN: 0.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
D: The test run did not produce a result.
AOM AV1 3.5, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 7.61, B: 7.69, C: 7.66, D: 7.67
AOM AV1 3.5, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 36.34, B: 36.61, C: 36.36, D: 36.52
AOM AV1 3.5, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 14.43, B: 14.67, C: 14.52, D: 14.53
AOM AV1 3.5, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 58.95, B: 58.84, C: 58.18, D: 58.14
AOM AV1 3.5, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 80.78, B: 81.58, C: 81.46, D: 81.04
AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 84.14, B: 85.07, C: 83.56, D: 84.35
AOM AV1 3.5, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 0.64, B: 0.64, C: 0.64, D: 0.64
AOM AV1 3.5, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 17.71, B: 17.71, C: 17.73, D: 17.59
AOM AV1 3.5, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 74.81, B: 73.60, C: 74.15, D: 73.14
AOM AV1 3.5, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 44.51, B: 44.99, C: 44.92, D: 45.05
AOM AV1 3.5, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 156.10, B: 155.18, C: 154.69, D: 155.73
AOM AV1 3.5, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 189.92, B: 189.67, C: 188.95, D: 189.14
AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 199.11, B: 200.46, C: 194.65, D: 198.99
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
QuadRay
VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.
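For reference, an FPS figure like those below is simply the number of frames rendered divided by elapsed wall-clock time. The Python sketch below illustrates that calculation with a hypothetical stand-in for a QuadRay frame render; it is not part of the actual benchmark.

# Minimal sketch of deriving FPS: time N frames, divide frame count by elapsed time.
import time

def render_frame():
    # placeholder CPU workload standing in for rendering one frame of a scene
    sum(i * i for i in range(100_000))

NUM_FRAMES = 100
start = time.time()
for _ in range(NUM_FRAMES):
    render_frame()
elapsed = time.time() - start

print(f"{NUM_FRAMES / elapsed:.2f} FPS")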
QuadRay 2022.05.25, Scene: 1 - Resolution: 4K (FPS, More Is Better): A: 8.42, B: 8.43, C: 8.38, D: 8.40
QuadRay 2022.05.25, Scene: 2 - Resolution: 4K (FPS, More Is Better): A: 2.30, B: 2.31, C: 2.31, D: 2.30
QuadRay 2022.05.25, Scene: 3 - Resolution: 4K (FPS, More Is Better): A: 1.96, B: 1.98, C: 1.97, D: 1.98
QuadRay 2022.05.25, Scene: 5 - Resolution: 4K (FPS, More Is Better): A: 0.53, B: 0.53, C: 0.53, D: 0.53
QuadRay 2022.05.25, Scene: 5 - Resolution: 1080p (FPS, More Is Better): A: 2.14, B: 2.14, C: 2.14, D: 2.14
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
PostgreSQL 15, Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better): A: 0.027, B: 0.028, C: 0.028, D: 0.028
PostgreSQL 15, Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better): A: 2507, B: 2488, C: 2486, D: 2484
PostgreSQL 15, Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better): A: 0.399, B: 0.402, C: 0.402, D: 0.403
PostgreSQL 15, Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, More Is Better): A: 310942, B: 311138, C: 308393, D: 314549
PostgreSQL 15, Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better): A: 0.161, B: 0.161, C: 0.162, D: 0.159
PostgreSQL 15, Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better): A: 307562, B: 309250, C: 307586, D: 310891
PostgreSQL 15, Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better): A: 0.325, B: 0.323, C: 0.325, D: 0.322
PostgreSQL 15, Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better): A: 3127, B: 3131, C: 3138, D: 3133
PostgreSQL 15, Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better): A: 15.99, B: 15.97, C: 15.93, D: 15.96
PostgreSQL 15, Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, More Is Better): A: 34362, B: 33875, C: 34948, D: 35562
PostgreSQL 15, Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better): A: 0.029, B: 0.030, C: 0.029, D: 0.028
PostgreSQL 15, Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better): A: 2861, B: 2856, D: 2873
PostgreSQL 15, Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better): A: 34.95, B: 35.02, D: 34.81
PostgreSQL 15, Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, More Is Better): A: 2378, B: 2400, D: 2394
PostgreSQL 15, Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better): A: 0.421, B: 0.417, D: 0.418
PostgreSQL 15, Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, More Is Better): A: 299777, B: 301135, D: 305622
PostgreSQL 15, Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better): A: 0.167, B: 0.166, D: 0.164
PostgreSQL 15, Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better): A: 297424, B: 299533, D: 300952
PostgreSQL 15, Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better): A: 0.336, B: 0.334, D: 0.332
PostgreSQL 15, Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better): A: 38140, B: 37994, D: 38108
PostgreSQL 15, Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better): A: 1.311, B: 1.316, D: 1.312
PostgreSQL 15, Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better): A: 38997, B: 39365, D: 39700
PostgreSQL 15, Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better): A: 2.564, B: 2.540, D: 2.519
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
A
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa20120a
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 13 October 2022 11:07 by user pts.
B: Kernel, Compiler, Processor, Python, and Security Notes identical to run A above.
Testing initiated at 13 October 2022 17:10 by user pts.
C: Kernel, Compiler, Processor, Python, and Security Notes identical to run A above.
Testing initiated at 13 October 2022 19:59 by user pts.
D Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (4201 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: Intel DG2 8GB (2400MHz), Audio: Intel Device 4f90, Monitor: ASUS VP28U, Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04, Kernel: 5.15.47+prerelease3723 (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server 1.21.1.3 + Wayland, OpenGL: 4.6 Mesa 22.2.0-devel (git-44289c46d9), Vulkan: 1.3.219, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel, Compiler, Processor, Python, and Security Notes identical to run A above.
Testing initiated at 13 October 2022 21:17 by user pts.