tr onednn 3.1 AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 23.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2303314-PTS-TRONEDNN36&grr&rdt .
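For readers who want to reproduce a comparable run, the oneDNN workload can be driven through the Phoronix Test Suite itself. Below is a minimal sketch, assuming a standard phoronix-test-suite installation on the PATH and the pts/onednn profile identifier; the profile version PTS resolves may not be the exact 3.1 harness used in this result.

# Sketch: run the oneDNN test profile through the Phoronix Test Suite.
# Assumes `phoronix-test-suite` is installed and that the profile identifier
# is `pts/onednn` (check with `phoronix-test-suite list-available-tests`).
import subprocess

def run_onednn_benchmark(profile: str = "pts/onednn") -> None:
    """Install (if needed) and run the given PTS test profile non-interactively."""
    # `batch-benchmark` uses the batch-setup defaults; swap in `benchmark`
    # for the interactive prompts seen on a normal run.
    subprocess.run(["phoronix-test-suite", "batch-benchmark", profile], check=True)

if __name__ == "__main__":
    run_onednn_benchmark()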
tr onednn 3.1
Runs a, b, c, and d were performed on the same system with the configuration below; only the OpenGL stack differed between runs (Mesa 22.3.6 vs. Mesa 23.0.1).

Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS)
Chipset: AMD Starship/Matisse
Memory: 128GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: DELL P2415Q
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.04
Kernel: 6.2.0-18-generic (x86_64)
Desktop: GNOME Shell 44.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.3.6 (LLVM 15.0.7 DRM 3.49) [some runs: 4.6 Mesa 23.0.1 (LLVM 15.0.7 DRM 3.49)]
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301055
Security Details - itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT enabled with STIBP protection; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
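The processor and kernel tunables reported above (schedutil governor under acpi-cpufreq with boost enabled, transparent huge pages set to madvise) can be cross-checked on a machine before re-running the tests. A minimal sketch follows, assuming a typical Linux sysfs layout; the exact paths can vary with the cpufreq driver (e.g. amd-pstate) and kernel version.

# Sketch: read back the tunables this result file reports, from local sysfs.
# Paths assume the acpi-cpufreq driver and a standard sysfs layout; they may
# not exist on systems using other drivers or older kernels.
from pathlib import Path

CHECKS = {
    "Scaling governor": "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
    "Boost": "/sys/devices/system/cpu/cpufreq/boost",
    "Transparent Huge Pages": "/sys/kernel/mm/transparent_hugepage/enabled",
}

def report() -> None:
    for label, path in CHECKS.items():
        p = Path(path)
        value = p.read_text().strip() if p.exists() else "(not present on this system)"
        print(f"{label}: {value}")

if __name__ == "__main__":
    report()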
tr onednn 3.1
oneDNN 3.1 results summary. All values are in ms; fewer is better.

Harness - Data Type - Engine                                  a           b           c           d
Recurrent Neural Network Training - bf16bf16bf16 - CPU        4042.59     4007.14     4027.35     4010.35
Recurrent Neural Network Training - u8s8f32 - CPU             4024.21     4014.98     4008.56     4001.77
Recurrent Neural Network Training - f32 - CPU                 4011.41     3998.12     4018.10     4017.57
Recurrent Neural Network Inference - bf16bf16bf16 - CPU       844.978     864.209     858.755     841.923
Recurrent Neural Network Inference - f32 - CPU                859.098     844.462     857.907     839.176
Recurrent Neural Network Inference - u8s8f32 - CPU            856.722     858.404     850.400     862.304
IP Shapes 1D - u8s8f32 - CPU                                  11.59889    3.60412     2.60751     2.33334
IP Shapes 1D - f32 - CPU                                      3.69076     2.43237     2.42598     1.57043
Deconvolution Batch shapes_1d - f32 - CPU                     10.48951    10.90270    10.23840    9.81509
IP Shapes 3D - u8s8f32 - CPU                                  3.48600     1.09122     1.13827     1.14615
Deconvolution Batch shapes_1d - u8s8f32 - CPU                 1.78620     1.76202     1.82592     1.74741
IP Shapes 3D - f32 - CPU                                      6.47745     6.37674     8.33405     8.41506
Convolution Batch Shapes Auto - f32 - CPU                     0.919370    0.923333    1.027830    0.967448
Convolution Batch Shapes Auto - u8s8f32 - CPU                 6.48493     6.54503     6.62890     6.66997
Deconvolution Batch shapes_3d - f32 - CPU                     2.08690     2.07202     2.10876     2.08485
Deconvolution Batch shapes_3d - u8s8f32 - CPU                 0.989134    0.978770    0.964088    0.978819
Convolution Batch Shapes Auto - bf16bf16bf16 - CPU            (no values reported in this result file)
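Because every harness reports latency in milliseconds (fewer is better), the four runs are easiest to compare as ratios against a common baseline. The sketch below hard-codes a few rows from the summary table and normalizes each run against run a; the row subset and the choice of a as baseline are illustrative, not part of the original result.

# Sketch: normalize selected rows of the summary table against run "a".
# Values are copied from the table above (ms, fewer is better).
RESULTS = {
    "RNN Training - bf16bf16bf16": {"a": 4042.59, "b": 4007.14, "c": 4027.35, "d": 4010.35},
    "IP Shapes 1D - u8s8f32":      {"a": 11.59889, "b": 3.60412, "c": 2.60751, "d": 2.33334},
    "IP Shapes 3D - f32":          {"a": 6.47745, "b": 6.37674, "c": 8.33405, "d": 8.41506},
}

def normalize(results: dict, baseline: str = "a") -> None:
    for bench, runs in results.items():
        base = runs[baseline]
        # A ratio below 1.0 means that run was faster (lower latency) than the baseline.
        ratios = ", ".join(f"{run}={value / base:.2f}x" for run, value in runs.items())
        print(f"{bench}: {ratios}")

if __name__ == "__main__":
    normalize(RESULTS)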
oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
a: 4042.59 (MIN: 4009.52), b: 4007.14 (MIN: 3981.44), c: 4027.35 (MIN: 4005.47), d: 4010.35 (MIN: 3987.69); SE +/- 7.92, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 4024.21 (MIN: 3990.89), b: 4014.98 (MIN: 3992.86), c: 4008.56 (MIN: 3987.06), d: 4001.77 (MIN: 3979.09); SE +/- 9.26, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 4011.41 (MIN: 3977.24), b: 3998.12 (MIN: 3974.85), c: 4018.10 (MIN: 3995.65), d: 4017.57 (MIN: 3992.22); SE +/- 6.72, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
a: 844.98 (MIN: 824.01), b: 864.21 (MIN: 847.67), c: 858.76 (MIN: 842.97), d: 841.92 (MIN: 825.42); SE +/- 3.68, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 859.10 (MIN: 831.54), b: 844.46 (MIN: 828.46), c: 857.91 (MIN: 840.05), d: 839.18 (MIN: 821.06); SE +/- 6.16, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 856.72 (MIN: 825.43), b: 858.40 (MIN: 842.21), c: 850.40 (MIN: 834.24), d: 862.30 (MIN: 844.30); SE +/- 8.46, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 11.59889 (MIN: 1.68), b: 3.60412 (MIN: 2.35), c: 2.60751 (MIN: 2.14), d: 2.33334 (MIN: 1.99); SE +/- 1.62613, N = 12
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

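The IP Shapes 1D u8s8f32 result above is the noisiest in this comparison: a large standard error (SE +/- 1.62613 over N = 12 samples) and roughly a 5x spread between the slowest and fastest run. As a reminder of what that error term represents, the sketch below computes a standard error of the mean from a list of per-trial timings using the usual sample-standard-deviation formula; the trial values are hypothetical and the exact aggregation the Phoronix Test Suite applies may differ.

# Sketch: standard error of the mean for a set of per-trial timings (ms).
# SE = sample standard deviation / sqrt(N). The sample values are made up
# purely to illustrate the formula.
from statistics import mean, stdev
from math import sqrt

def standard_error(samples: list[float]) -> float:
    return stdev(samples) / sqrt(len(samples))

if __name__ == "__main__":
    trials = [11.2, 9.8, 13.4, 10.9]  # hypothetical per-trial latencies in ms
    print(f"mean = {mean(trials):.2f} ms, SE +/- {standard_error(trials):.2f} (N = {len(trials)})")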
oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 3.69076 (MIN: 2.44), b: 2.43237 (MIN: 1.98), c: 2.42598 (MIN: 2.00), d: 1.57043 (MIN: 1.39); SE +/- 0.06973, N = 12
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 10.48951 (MIN: 8.19), b: 10.90270 (MIN: 8.81), c: 10.23840 (MIN: 8.50), d: 9.81509 (MIN: 8.37); SE +/- 0.09699, N = 7
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 3.48600 (MIN: 0.97), b: 1.09122 (MIN: 0.99), c: 1.13827 (MIN: 1.05), d: 1.14615 (MIN: 1.04); SE +/- 0.52175, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 1.78620 (MIN: 1.47), b: 1.76202 (MIN: 1.54), c: 1.82592 (MIN: 1.50), d: 1.74741 (MIN: 1.49); SE +/- 0.01827, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 6.47745 (MIN: 5.78), b: 6.37674 (MIN: 6.23), c: 8.33405 (MIN: 8.22), d: 8.41506 (MIN: 8.30); SE +/- 0.01580, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 0.919370 (MIN: 0.85), b: 0.923333 (MIN: 0.86), c: 1.027830 (MIN: 0.94), d: 0.967448 (MIN: 0.89); SE +/- 0.002441, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 6.48493 (MIN: 6.38), b: 6.54503 (MIN: 6.43), c: 6.62890 (MIN: 6.52), d: 6.66997 (MIN: 6.56); SE +/- 0.00405, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
a: 2.08690 (MIN: 2.03), b: 2.07202 (MIN: 2.02), c: 2.10876 (MIN: 2.03), d: 2.08485 (MIN: 2.03); SE +/- 0.01000, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
a: 0.989134 (MIN: 0.93), b: 0.978770 (MIN: 0.92), c: 0.964088 (MIN: 0.90), d: 0.978819 (MIN: 0.92); SE +/- 0.001428, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Phoronix Test Suite v10.8.5