oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onednn.
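For unattended or repeated runs, the Phoronix Test Suite also supports pre-installing the test and forcing a fixed run count via environment variables. The sketch below is a minimal example; the result name and identifier are arbitrary choices, and batch-benchmark assumes batch mode has previously been configured with phoronix-test-suite batch-setup:

```shell
# Install the oneDNN test profile ahead of time (builds oneDNN + benchdnn).
phoronix-test-suite install onednn

# Hypothetical unattended run: force 5 runs per harness and pre-name the
# result file so no interactive prompts are needed.
FORCE_TIMES_TO_RUN=5 \
TEST_RESULTS_NAME=onednn-cpu-comparison \
TEST_RESULTS_IDENTIFIER=run-1 \
phoronix-test-suite batch-benchmark onednn
```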
Test Created 17 June 2020
Last Updated 15 October 2024
Test Type Processor
Average Install Time 5 Minutes, 35 Seconds
Average Run Time 2 Minutes, 16 Seconds
Test Dependencies C/C++ Compiler Toolchain + CMake
Accolades: 90k+ Downloads
[oneDNN popularity statistics chart, pts/onednn, June 2020 – December 2024: Public Result Uploads *, Reported Installs **, Reported Test Completions **, Test Profile Page Views ***, OpenBenchmarking.org Events]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 17 January 2025.
Harness Option Popularity (OpenBenchmarking.org): Recurrent Neural Network Training 14.9%, Recurrent Neural Network Inference 14.7%, IP Shapes 3D 14.7%, Deconvolution Batch shapes_1d 14.5%, Deconvolution Batch shapes_3d 13.9%, Convolution Batch Shapes Auto 13.7%, IP Shapes 1D 13.7%
Revision History
pts/onednn-3.6.0 [View Source ] Tue, 15 Oct 2024 14:53:25 GMT Update against oneDNN 3.6 upstream.
pts/onednn-3.4.0 [View Source ] Fri, 01 Mar 2024 13:02:43 GMT Update against oneDNN 3.4 upstream.
pts/onednn-3.3.0 [View Source ] Thu, 12 Oct 2023 11:14:07 GMT Update against oneDNN 3.3 upstream.
pts/onednn-3.1.0 [View Source ] Fri, 31 Mar 2023 18:14:37 GMT Update against oneDNN 3.1 upstream.
pts/onednn-3.0.0 [View Source ] Mon, 19 Dec 2022 21:07:39 GMT Update against oneDNN 3.0 upstream.
pts/onednn-2.7.0 [View Source ] Wed, 28 Sep 2022 13:00:44 GMT Update against oneDNN 2.7 upstream.
pts/onednn-1.8.0 [View Source ] Tue, 29 Mar 2022 19:55:25 GMT Update against oneDNN 2.6 upstream.
pts/onednn-1.7.0 [View Source ] Sat, 13 Mar 2021 07:49:33 GMT Update against oneDNN 2.1.2 upstream.
pts/onednn-1.6.1 [View Source ] Sun, 20 Dec 2020 09:58:16 GMT This test profile builds and works fine on macOS, so enable it (MacOSX).
pts/onednn-1.6.0 [View Source ] Wed, 09 Dec 2020 13:47:31 GMT Update against oneDNN 2.0 upstream.
pts/onednn-1.5.0 [View Source ] Wed, 17 Jun 2020 16:26:39 GMT Initial commit of oneDNN test profile based on Intel oneDNN 1.5, forked from the existing mkl-dnn test profile, which was named for MKL-DNN before the library was renamed to DNNL and then oneDNN. A new test profile was created to match Intel's naming convention.
Performance Metrics
[Analyze Test Configuration: selector listing all harness / data type / engine combinations for pts/onednn 1.5.x through 3.6.x]

oneDNN 3.6 - Harness: Recurrent Neural Network Training - Engine: CPU
OpenBenchmarking.org metrics for this test profile configuration based on 210 public results since 15 October 2024, with the latest data as of 18 January 2025.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.
Component | Percentile Rank | # Compatible Public Results | ms (Average)
Detailed Performance Overview
[OpenBenchmarking.org distribution of public results - Harness: Recurrent Neural Network Training - Engine: CPU: 210 results ranging from 326 to 59466 ms]
Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN 3.6 - Harness: Recurrent Neural Network Training - Engine: CPU) has an average run-time of 5 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
[Time required to complete benchmark (Harness: Recurrent Neural Network Training - Engine: CPU), in minutes: Min: 4 / Avg: 4.7 / Max: 18]
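The dynamic run-count behavior described above (run at least 3 times, add runs while the results remain noisy) can be sketched as a simple loop over the relative standard deviation. This is an illustrative approximation, not the Phoronix Test Suite's actual implementation; the threshold and run cap are hypothetical values:

```python
import statistics

def run_until_stable(run_once, min_runs=3, max_runs=15, max_rel_stddev=0.035):
    """Collect benchmark samples until the relative standard deviation
    (stddev / mean) falls below a threshold or a run cap is reached.
    Mirrors, in spirit, how extra runs get added for statistical
    accuracy; the 3.5% threshold here is purely illustrative."""
    samples = []
    for _ in range(max_runs):
        samples.append(run_once())
        if len(samples) < min_runs:
            continue  # always take the minimum number of runs first
        rel = statistics.stdev(samples) / statistics.mean(samples)
        if rel <= max_rel_stddev:
            break  # results are stable enough; stop early
    return samples

# Usage with a deterministic stand-in for a real benchmark run:
fake_times = iter([1200.0, 1210.0, 1195.0])
samples = run_until_stable(lambda: next(fake_times))
print(len(samples))  # stops at the 3-run minimum when variance is low
```

With low-variance samples the loop stops at the minimum of 3 runs; noisy samples keep it running until the cap.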
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
Instruction Set | Support | Instructions Detected
Advanced Vector Extensions (AVX)
Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
Instructions detected: VZEROUPPER VBROADCASTSS VINSERTF128 VPERMILPS VBROADCASTSD VEXTRACTF128 VPERMILPD VPERM2F128 VMASKMOVPS
Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
Instructions detected: VPBROADCASTQ VINSERTI128 VPBROADCASTD VPBLENDD VPSLLVD VEXTRACTI128 VPSRAVD VPERM2I128 VPGATHERQQ VGATHERQPS VPERMQ VPBROADCASTW VPSRLVQ VPBROADCASTB VPGATHERDQ VPGATHERQD VPSLLVQ VPMASKMOVQ VPERMD
Fused Multiply-Add (FMA)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
Instructions detected: VFMADD231SS VFMADD213SS VFMADD132SS VFMADD132SD VFMADD132PS VFMADD231PS VFMADD213PS VFNMADD132PS VFNMSUB231PS VFNMSUB132SS VFNMADD132SS VFNMSUB231SS VFNMADD231PS VFNMADD231SS VFNMADD213SS VFMADD231SD VFMSUB132SS VFMADD132PD VFMADD231PD VFMADD213PD VFMSUB231SS VFMSUB231SD
Advanced Vector Extensions 512 (AVX512)
Requires passing a supported compiler/build flag (verified with targets: cascadelake, sapphirerapids).
Instructions detected: (ZMM register use)
The test / benchmark does honor compiler flag changes.
Last automated analysis: 16 October 2024
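Since the profile honors compiler flag changes, one way to exercise the AVX-512 code paths on capable hardware is to export build flags before (re)installing the test. The flags below are illustrative and assume a GCC-style toolchain; -march=sapphirerapids is one of the verified targets noted above, so substitute the target matching your CPU:

```shell
# Rebuild the test with an AVX-512-capable target before benchmarking.
export CFLAGS="-O3 -march=sapphirerapids"
export CXXFLAGS="-O3 -march=sapphirerapids"

# force-install triggers a rebuild so the new flags actually take effect.
phoronix-test-suite force-install onednn
phoronix-test-suite benchmark onednn
```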
This test profile binary relies on the shared libraries libdnnl.so.3, libm.so.6, libgomp.so.1, and libc.so.6.
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. The listed CPU architectures are those where successful OpenBenchmarking.org result uploads occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64 | (Many Processors)
ARMv8 64-bit | aarch64 | ARMv8 Neoverse-N1 128-Core, ARMv8 Neoverse-V2 72-Core
Recent Test Results
2 Systems - 188 Benchmark Results
2 x AMD EPYC 9654 96-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 23.10 - 6.9.0-060900rc1daily20240327-generic - GCC 13.2.0
2 Systems - 267 Benchmark Results
2 x AMD EPYC 9124 16-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 23.10 - 6.9.0-060900rc1daily20240327-generic - GCC 13.2.0
7 Systems - 216 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - KDE Plasma 5.27.10
7 Systems - 216 Benchmark Results
AMD Ryzen 7 5700X 8-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD Starship
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
6 Systems - 216 Benchmark Results
AMD Ryzen 7 5700X 8-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD Starship
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
6 Systems - 216 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 216 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 214 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 211 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 205 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 191 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 190 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
5 Systems - 184 Benchmark Results
AMD Ryzen 5 2600 Six-Core - Gigabyte X470 AORUS ULTRA GAMING-CF - AMD 17h
Ubuntu 23.10 - 6.5.0-44-generic - X Server 1.21.1.7
Most Popular Test Results
4 Systems - 24 Benchmark Results
AMD Ryzen Threadripper 7980X 64-Cores - System76 Thelio Major - AMD Device 14a4
Ubuntu 24.10 - 6.11.0-8-generic - GNOME Shell 47.0
4 Systems - 24 Benchmark Results
ARMv8 Neoverse-V2 - Pegatron JIMBO P4352 - 1 x 480GB LPDDR5-6400MT
Ubuntu 24.04 - 6.8.0-45-generic-64k - NVIDIA
5 Systems - 33 Benchmark Results
Intel Core Ultra 7 155H - MTL Swift SFG14-72T Coral_MTH - Intel Device 7e7f
Ubuntu 24.10 - 6.11.0-rc6-phx - GNOME Shell 47.0
3 Systems - 24 Benchmark Results
2 x AMD EPYC 9575F 64-Core - AMD VOLCANO - AMD Device 153a
Ubuntu 24.04 - 6.8.12-powercap-1ah-patched - GCC 13.2.0
4 Systems - 24 Benchmark Results
AMD Ryzen Threadripper 3970X 32-Core - ASUS ROG ZENITH II EXTREME - AMD Starship
Ubuntu 22.04 - 6.8.0-45-generic - GNOME Shell 42.9
3 Systems - 55 Benchmark Results
Intel Core i9-10980XE - ASRock X299 Steel Legend - Intel Sky Lake-E DMI3 Registers
Ubuntu 22.04 - 6.8.0-40-generic - GNOME Shell 42.9
8 Systems - 27 Benchmark Results
Intel Xeon 6980P - Intel AvenueCity v0.01 - Intel Ice Lake IEH
Ubuntu 24.04 - 6.10.0-phx - GCC 13.2.0
3 Systems - 60 Benchmark Results
AMD Ryzen 9 3900XT 12-Core - MSI MEG X570 GODLIKE - AMD Starship
Ubuntu 22.04 - 6.8.0-47-generic - GNOME Shell 42.9
4 Systems - 24 Benchmark Results
Intel Core Ultra 7 256V - ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 - Intel Device a87f
Ubuntu 24.10 - 6.11.0-8-generic - GNOME Shell 47.0
2 Systems - 27 Benchmark Results
2 x AMD EPYC 9755 128-Core - AMD VOLCANO - AMD Device 153a
Ubuntu 24.04 - 6.12.0-rc3-phx - GCC 13.2.0 + Clang 18.1.3
4 Systems - 24 Benchmark Results
AMD Ryzen 9 9950X 16-Core - ASUS ROG STRIX X670E-E GAMING WIFI - AMD Device 14d8
Ubuntu 24.04 - 6.10.1-061001-generic - GNOME Shell 46.0
2 Systems - 40 Benchmark Results
Intel Core i9-14900K - ASUS PRIME Z790-P WIFI - Intel Raptor Lake-S PCH
Ubuntu 24.04 - 6.10.0-061000rc6daily20240706-generic - GNOME Shell 46.0
Featured Kernel Comparison
AMD Ryzen 7 7840HS - Framework Laptop 16 - AMD Device 14e8
Ubuntu 24.04 - 6.8.0-40-generic - GNOME Shell 46.0
3 Systems - 24 Benchmark Results
AMD Ryzen AI 9 365 - ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 - AMD Device 1507
Ubuntu 24.10 - 6.11.0-rc6-phx - GNOME Shell 47.0
Featured Graphics Comparison
AMD Ryzen 7 7840U - PHX Swift SFE16-43 Ray_PEU - AMD Device 14e8
Ubuntu 24.10 - 6.11.0-9-generic - GNOME Shell 47.0