AMD EPYC 7763 Cooling Performance

AMD EPYC 7763 64-Core CPU benchmarks by Michael Larabel evaluating three heatsink fan coolers in a 4U server: the Noctua NH-U9 TR4-SP3, Dynatron A26, and Dynatron A38.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2104096-IB-HEATSINK430

The tests in this comparison span the following categories: AV1 (2 tests), Timed Code Compilation (6), C/C++ Compiler Tests (7), CPU Massive (10), Creator Workloads (10), Encoding (4), Game Development (2), HPC - High Performance Computing (5), Machine Learning (2), Molecular Dynamics (3), MPI Benchmarks (2), Multi-Core (17), NVIDIA GPU Compute (5), OpenMPI Tests (2), Programmer / Developer System Benchmarks (7), Python Tests (3), Renderers (3), Scientific Computing (3), Software Defined Radio (4), Server CPU Tests (8), Video Encoding (4).


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
Noctua NH-U9 TR4-SP3
April 08 2021
  8 Hours, 12 Minutes
Dynatron A26
April 09 2021
  8 Hours, 45 Minutes
Dynatron A38
April 09 2021
  11 Hours, 17 Minutes
Invert Hiding All Results Option
  9 Hours, 25 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


AMD EPYC 7763 Cooling Performance Benchmarks - System Details (OpenBenchmarking.org, Phoronix Test Suite)

Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS)
Chipset: AMD Starship/Matisse
Memory: 126GB
Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: llvmpipe
Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 20.04
Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
OpenGL: 3.3 Mesa 20.0.8 (LLVM 10.0.0 128 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1024x768

System Logs:
- Transparent Huge Pages: madvise
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0xa001119
- Python 3.8.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite) - Noctua NH-U9 TR4-SP3 vs. Dynatron A26 vs. Dynatron A38. The three coolers land within roughly 100% to 102% of one another across the tested workloads: Stockfish, Xcompact3d Incompact3d, Timed Erlang/OTP Compilation, Chaos Group V-RAY, ViennaCL, OpenSCAD, Mobile Neural Network, Timed Node.js Compilation, ASTC Encoder, Timed GDB GNU Debugger Compilation, LuaRadio, GROMACS, IndigoBench, AOM AV1, Timed Apache Compilation, SVT-AV1, Timed Linux Kernel Compilation, NAMD, simdjson, srsLTE, GNU Radio, SVT-HEVC, Liquid-DSP, GNU GMP GMPbench, Timed Mesa Compilation, SVT-VP9, Blender, oneDNN.
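The overview percentages above come from normalizing each test against a baseline cooler and aggregating; the Phoronix Test Suite offers a geometric mean for this kind of overall roll-up. A minimal sketch of that aggregation, using hypothetical normalized scores rather than the actual data in this result file:

```python
import math

def geometric_mean(values):
    # nth root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test scores relative to a baseline cooler (1.00 = parity)
relative_scores = [1.02, 0.99, 1.01]
print(round(geometric_mean(relative_scores), 4))  # overall relative performance
```

The geometric mean is preferred over the arithmetic mean here because it treats a 2x speedup and a 2x slowdown symmetrically across heterogeneous benchmarks.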

[Condensed result table: complete numeric results for every benchmark across the Noctua NH-U9 TR4-SP3, Dynatron A26, and Dynatron A38 runs. The per-test listings that follow present these figures individually.]

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.
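For reference, the incompressible Navier-Stokes system that Incompact3d targets, together with a generic scalar transport equation, can be written in standard textbook form (this is the general formulation, not code taken from the benchmark):

```latex
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \\
\nabla\cdot\mathbf{u} &= 0, \\
\frac{\partial \phi}{\partial t} + (\mathbf{u}\cdot\nabla)\phi
  &= \kappa\,\nabla^{2}\phi,
\end{aligned}
```

where \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density, \(\nu\) the kinematic viscosity, and \(\phi\) a transported scalar with diffusivity \(\kappa\).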

OpenBenchmarking.org - Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 625.81 (SE +/- 0.48, N = 3)
  Dynatron A38: 627.67 (SE +/- 0.23, N = 3)
  Dynatron A26: 667.27 (SE +/- 11.65, N = 9)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
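Each result in this file reports a standard error over N runs (the "SE +/- x, N = y" annotations). A minimal sketch of how a standard error of the mean is computed; the run values below are hypothetical, not taken from this result file:

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample std dev divided by sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (N - 1 in the denominator)
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

runs = [625.3, 626.1, 626.0]  # hypothetical per-run times in seconds
print(round(standard_error(runs), 3))  # -> 0.252
```

A small SE relative to the mean indicates the run-to-run spread is low, which is why results with a large SE (such as a cooler throttling intermittently) warrant a closer look.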

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A38: 0.640331 (SE +/- 0.005445, N = 5; MIN: 0.58)
  Dynatron A26: 0.650624 (SE +/- 0.004042, N = 5; MIN: 0.59)
  Noctua NH-U9 TR4-SP3: 0.663059 (SE +/- 0.006455, N = 5; MIN: 0.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 104.57 (SE +/- 1.05, N = 7)
  Dynatron A26: 101.70 (SE +/- 1.09, N = 6)
  Dynatron A38: 101.27 (SE +/- 0.67, N = 6)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Dynatron A26: 4.83 (SE +/- 0.02, N = 3)
  Dynatron A38: 4.77 (SE +/- 0.04, N = 3)
  Noctua NH-U9 TR4-SP3: 4.72 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Stockfish 13 - Total Time (Nodes Per Second, More Is Better)
  Dynatron A26: 160107651 (SE +/- 2061799.88, N = 15)
  Dynatron A38: 158512004 (SE +/- 1246901.76, N = 3)
  Noctua NH-U9 TR4-SP3: 156656685 (SE +/- 2161918.89, N = 4)
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Dynatron A38: 9.35 (SE +/- 0.11, N = 6)
  Dynatron A26: 9.16 (SE +/- 0.10, N = 3)
  Noctua NH-U9 TR4-SP3: 9.15 (SE +/- 0.07, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY (GB/s, More Is Better)
  Dynatron A38: 640 (SE +/- 1.86, N = 15)
  Dynatron A26: 640 (SE +/- 2.67, N = 14)
  Noctua NH-U9 TR4-SP3: 627 (SE +/- 2.32, N = 15)
1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - srsLTE 20.10.1 - Test: OFDM_Test (Samples / Second, More Is Better)
  Dynatron A38: 117866667 (SE +/- 1178039.80, N = 3)
  Dynatron A26: 116233333 (SE +/- 1902921.73, N = 3)
  Noctua NH-U9 TR4-SP3: 115633333 (SE +/- 1770436.23, N = 3)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1373.91 (SE +/- 3.07, N = 3; MIN: 1332.45)
  Dynatron A26: 1378.86 (SE +/- 9.11, N = 3; MIN: 1326.59)
  Dynatron A38: 1400.19 (SE +/- 14.96, N = 3; MIN: 1335.97)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Dynatron A38: 34.73 (SE +/- 0.28, N = 3)
  Dynatron A26: 34.29 (SE +/- 0.22, N = 3)
  Noctua NH-U9 TR4-SP3: 34.10 (SE +/- 0.07, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-T (GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 793 (SE +/- 1.64, N = 15)
  Dynatron A26: 781 (SE +/- 1.78, N = 14)
  Dynatron A38: 779 (SE +/- 3.44, N = 15)
1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Dynatron A38: 87.63 (SE +/- 0.58, N = 6)
  Dynatron A26: 86.56 (SE +/- 0.59, N = 6)
  Noctua NH-U9 TR4-SP3: 86.13 (SE +/- 0.36, N = 6)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better)
  Dynatron A26: 22.33 (SE +/- 0.30, N = 3)
  Noctua NH-U9 TR4-SP3: 22.51 (SE +/- 0.04, N = 3)
  Dynatron A38: 22.71 (SE +/- 0.08, N = 3)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Dynatron A26: 38.06 (SE +/- 0.46, N = 6)
  Dynatron A38: 37.46 (SE +/- 0.48, N = 5)
  Noctua NH-U9 TR4-SP3: 37.45 (SE +/- 0.43, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 5.785 (SE +/- 0.039, N = 3; MIN: 5.58 / MAX: 6.64)
  Dynatron A26: 5.797 (SE +/- 0.027, N = 3; MIN: 5.54 / MAX: 7.52)
  Dynatron A38: 5.878 (SE +/- 0.015, N = 3; MIN: 5.64 / MAX: 6.84)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ViennaCL 1.7.1 - Test: CPU BLAS - dAXPY (GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 1222 (SE +/- 2.23, N = 15)
  Dynatron A26: 1205 (SE +/- 2.28, N = 14)
  Dynatron A38: 1204 (SE +/- 1.31, N = 15)
1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1370.82 (SE +/- 8.02, N = 3; MIN: 1322.76)
  Dynatron A38: 1381.77 (SE +/- 8.30, N = 3; MIN: 1330.53)
  Dynatron A26: 1391.14 (SE +/- 4.55, N = 3; MIN: 1343.15)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Timed Erlang/OTP Compilation 23.2 - Time To Compile (Seconds, Fewer Is Better)
  Dynatron A26: 132.50 (SE +/- 0.39, N = 3)
  Dynatron A38: 133.12 (SE +/- 0.29, N = 3)
  Noctua NH-U9 TR4-SP3: 134.33 (SE +/- 0.26, N = 3)

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, More Is Better)
  Dynatron A26: 567.7 (SE +/- 5.98, N = 3)
  Noctua NH-U9 TR4-SP3: 560.9 (SE +/- 8.06, N = 4)
  Dynatron A38: 560.1 (SE +/- 5.99, N = 9)
1. GNU Radio 3.8.1.0

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenSCAD - Render: Mini-ITX Case (Seconds, Fewer Is Better)
  Dynatron A26: 45.23 (SE +/- 0.17, N = 3)
  Dynatron A38: 45.48 (SE +/- 0.17, N = 3)
  Noctua NH-U9 TR4-SP3: 45.84 (SE +/- 0.06, N = 3)
1. OpenSCAD version 2019.05

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 9.473 (SE +/- 0.108, N = 4)
  Dynatron A26: 9.452 (SE +/- 0.083, N = 4)
  Dynatron A38: 9.355 (SE +/- 0.114, N = 6)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A38: 3.61762 (SE +/- 0.03350, N = 5; MIN: 3.36)
  Dynatron A26: 3.64941 (SE +/- 0.02260, N = 5; MIN: 3.4)
  Noctua NH-U9 TR4-SP3: 3.66307 (SE +/- 0.04102, N = 5; MIN: 3.37)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1.65220 (SE +/- 0.00301, N = 7; MIN: 1.57)
  Dynatron A26: 1.65671 (SE +/- 0.00304, N = 7; MIN: 1.57)
  Dynatron A38: 1.67256 (SE +/- 0.01439, N = 7; MIN: 1.57)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mobile Neural Network 1.1.3 - Model: resnet-v2-50 (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 22.13 (SE +/- 0.07, N = 3; MIN: 21.58 / MAX: 32.33)
  Dynatron A38: 22.27 (SE +/- 0.10, N = 3; MIN: 21.57 / MAX: 30.86)
  Dynatron A26: 22.40 (SE +/- 0.09, N = 3; MIN: 21.64 / MAX: 41.29)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, More Is Better)
  Dynatron A26: 1110.6 (SE +/- 1.05, N = 3)
  Noctua NH-U9 TR4-SP3: 1101.6 (SE +/- 3.75, N = 3)
  Dynatron A38: 1097.4 (SE +/- 2.63, N = 3)

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenSCAD - Render: Leonardo Phone Case Slim (Seconds, Fewer Is Better)
  Dynatron A26: 18.53 (SE +/- 0.04, N = 3)
  Noctua NH-U9 TR4-SP3: 18.70 (SE +/- 0.10, N = 3)
  Dynatron A38: 18.73 (SE +/- 0.10, N = 3)
1. OpenSCAD version 2019.05

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A26: 1369.98 (SE +/- 3.15, N = 3; MIN: 1325.96)
  Dynatron A38: 1377.49 (SE +/- 4.57, N = 3; MIN: 1335.99)
  Noctua NH-U9 TR4-SP3: 1385.32 (SE +/- 3.58, N = 3; MIN: 1350.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenSCAD - Render: Pistol (Seconds, Fewer Is Better)
  Dynatron A26: 108.62 (SE +/- 0.10, N = 3)
  Dynatron A38: 108.79 (SE +/- 0.31, N = 3)
  Noctua NH-U9 TR4-SP3: 109.83 (SE +/- 0.20, N = 3)
1. OpenSCAD version 2019.05

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Chaos Group V-RAY 5 - Mode: CPU (vsamples, More Is Better)
  Dynatron A38: 58504 (SE +/- 477.68, N = 3)
  Dynatron A26: 58270 (SE +/- 814.01, N = 3)
  Noctua NH-U9 TR4-SP3: 57912 (SE +/- 265.36, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Dynatron A26: 16.25 (SE +/- 0.06, N = 3)
  Dynatron A38: 16.14 (SE +/- 0.05, N = 3)
  Noctua NH-U9 TR4-SP3: 16.09 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mobile Neural Network 1.1.3 - Model: inception-v3 (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 28.17 (SE +/- 0.03, N = 3; MIN: 27.06 / MAX: 43.23)
  Dynatron A38: 28.33 (SE +/- 0.10, N = 3; MIN: 27.14 / MAX: 44.36)
  Dynatron A26: 28.45 (SE +/- 0.13, N = 3; MIN: 27.16 / MAX: 42.86)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A26: 0.381495 (SE +/- 0.001495, N = 4; MIN: 0.36)
  Noctua NH-U9 TR4-SP3: 0.384929 (SE +/- 0.005034, N = 4; MIN: 0.36)
  Dynatron A38: 0.385207 (SE +/- 0.004169, N = 4; MIN: 0.36)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenSCAD - Render: Retro Car (Seconds, Fewer Is Better)
  Dynatron A26: 18.89 (SE +/- 0.05, N = 3)
  Dynatron A38: 18.93 (SE +/- 0.03, N = 3)
  Noctua NH-U9 TR4-SP3: 19.06 (SE +/- 0.02, N = 3)
1. OpenSCAD version 2019.05

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 6.74 (SE +/- 0.02, N = 3)
  Dynatron A26: 6.71 (SE +/- 0.01, N = 3)
  Dynatron A38: 6.68 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ASTC Encoder 2.4 - Preset: Exhaustive (Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 20.44 (SE +/- 0.05, N = 3)
  Dynatron A38: 20.56 (SE +/- 0.02, N = 3)
  Dynatron A26: 20.62 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -flto -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Dynatron A38: 24.82 (SE +/- 0.10, N = 3)
  Dynatron A26: 24.71 (SE +/- 0.18, N = 3)
  Noctua NH-U9 TR4-SP3: 24.61 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mobile Neural Network 1.1.3 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 3.757 (SE +/- 0.017, N = 3; MIN: 3.63 / MAX: 6.09)
  Dynatron A38: 3.780 (SE +/- 0.016, N = 3; MIN: 3.66 / MAX: 6.67)
  Dynatron A26: 3.789 (SE +/- 0.021, N = 3; MIN: 3.66 / MAX: 4.64)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 0.720928 (SE +/- 0.002487, N = 4; MIN: 0.67)
  Dynatron A38: 0.721586 (SE +/- 0.001725, N = 4; MIN: 0.67)
  Dynatron A26: 0.726789 (SE +/- 0.001153, N = 4; MIN: 0.67)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1.17758 (SE +/- 0.00777, N = 4, MIN: 1)
  Dynatron A38: 1.17906 (SE +/- 0.01187, N = 4, MIN: 0.99)
  Dynatron A26: 1.18685 (SE +/- 0.00760, N = 4, MIN: 0.98)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better)
  Dynatron A38: 93.05 (SE +/- 0.62, N = 6)
  Noctua NH-U9 TR4-SP3: 92.86 (SE +/- 0.39, N = 6)
  Dynatron A26: 92.35 (SE +/- 0.38, N = 6)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
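As a rough illustration of what a parsing throughput figure means (gigabytes of JSON input consumed per second of wall time), here is a sketch using Python's built-in json module as a stand-in parser — simdjson itself is a C++ library and far faster; the document shape here is invented:

```python
import json
import time

# Build a synthetic JSON payload of many small records.
doc = json.dumps([{"id": i, "name": "user%d" % i} for i in range(10_000)])
payload = doc.encode("utf-8")

start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

# Throughput in GB/s: bytes parsed divided by seconds elapsed.
gb_per_s = len(payload) / elapsed / 1e9
```

The benchmark repeats this over fixed input files (tweets, user records, etc.) so the GB/s numbers are comparable across runs.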

simdjson 0.8.2 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 4.01 (SE +/- 0.01, N = 3)
  Dynatron A26: 4.00 (SE +/- 0.01, N = 3)
  Dynatron A38: 3.98 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -pthread

AOM AV1

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Dynatron A26: 21.47 (SE +/- 0.06, N = 3)
  Dynatron A38: 21.46 (SE +/- 0.03, N = 3)
  Noctua NH-U9 TR4-SP3: 21.31 (SE +/- 0.04, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 607.02 (SE +/- 1.56, N = 12)
  Dynatron A38: 605.97 (SE +/- 0.85, N = 12)
  Dynatron A26: 602.55 (SE +/- 1.45, N = 12)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY (GB/s, More Is Better)
  Dynatron A26: 1409 (SE +/- 3.55, N = 14)
  Noctua NH-U9 TR4-SP3: 1401 (SE +/- 5.47, N = 15)
  Dynatron A38: 1399 (SE +/- 3.16, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
  Dynatron A38: 24.36 (SE +/- 0.06, N = 3)
  Dynatron A26: 24.26 (SE +/- 0.03, N = 3)
  Noctua NH-U9 TR4-SP3: 24.19 (SE +/- 0.03, N = 3)

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A38: 0.603621 (SE +/- 0.000705, N = 3, MIN: 0.56)
  Noctua NH-U9 TR4-SP3: 0.605058 (SE +/- 0.001828, N = 3, MIN: 0.56)
  Dynatron A26: 0.607655 (SE +/- 0.001607, N = 3, MIN: 0.56)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
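The FM deemphasis and IIR filter tests below stream samples through recursive filters. An FM deemphasis stage is essentially a single-pole low-pass IIR filter; a minimal pure-Python sketch of that kind of block (the coefficient and input here are illustrative, not GNU Radio's actual implementation):

```python
def single_pole_iir(samples, alpha):
    """One-pole low-pass IIR: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    out, y = [], 0.0
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

# Feeding in a unit step: the output rises monotonically toward 1.0.
step = single_pole_iir([1.0] * 50, alpha=0.2)
```

The throughput figures below measure how many MiB/s of samples a block like this can sustain when vectorized and run across the 64 cores.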

GNU Radio - Test: FM Deemphasis Filter (MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 765.3 (SE +/- 1.86, N = 4)
  Dynatron A38: 764.1 (SE +/- 1.58, N = 9)
  Dynatron A26: 760.3 (SE +/- 1.05, N = 3)
  1. 3.8.1.0

GNU Radio - Test: IIR Filter (MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 608.6 (SE +/- 1.08, N = 4)
  Dynatron A38: 607.0 (SE +/- 1.20, N = 9)
  Dynatron A26: 604.7 (SE +/- 1.25, N = 3)
  1. 3.8.1.0

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software defined (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: PHY_DL_Test (UE Mb/s, More Is Better)
  Dynatron A26: 94.3 (SE +/- 0.23, N = 3)
  Noctua NH-U9 TR4-SP3: 94.3 (SE +/- 0.46, N = 3)
  Dynatron A38: 93.7 (SE +/- 0.52, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

GNU Radio

GNU Radio - Test: Signal Source (Cosine) (MiB/s, More Is Better)
  Dynatron A26: 3324.7 (SE +/- 26.65, N = 3)
  Noctua NH-U9 TR4-SP3: 3322.2 (SE +/- 6.28, N = 4)
  Dynatron A38: 3303.6 (SE +/- 17.69, N = 9)
  1. 3.8.1.0

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - sDOT (GB/s, More Is Better)
  Dynatron A26: 640 (SE +/- 1.19, N = 14)
  Noctua NH-U9 TR4-SP3: 637 (SE +/- 0.77, N = 14)
  Dynatron A38: 636 (SE +/- 1.17, N = 13)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A26: 3.02459 (SE +/- 0.00911, N = 9, MIN: 2.24)
  Noctua NH-U9 TR4-SP3: 3.03820 (SE +/- 0.00904, N = 9, MIN: 2.21)
  Dynatron A38: 3.04352 (SE +/- 0.00693, N = 9, MIN: 2.34)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 324.42 (SE +/- 1.48, N = 10)
  Dynatron A26: 323.60 (SE +/- 0.66, N = 10)
  Dynatron A38: 322.43 (SE +/- 0.82, N = 10)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 7.13608 (SE +/- 0.03799, N = 3, MIN: 6.04)
  Dynatron A26: 7.17839 (SE +/- 0.02988, N = 3, MIN: 6.17)
  Dynatron A38: 7.18002 (SE +/- 0.01979, N = 3, MIN: 6.18)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 2.328 (SE +/- 0.010, N = 3, MIN: 2.28 / MAX: 2.55)
  Dynatron A38: 2.332 (SE +/- 0.014, N = 3, MIN: 2.28 / MAX: 2.65)
  Dynatron A26: 2.342 (SE +/- 0.012, N = 3, MIN: 2.29 / MAX: 2.64)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsLTE

srsLTE 20.10.1 - Test: PHY_DL_Test (eNb Mb/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 257.4 (SE +/- 0.22, N = 3)
  Dynatron A26: 257.1 (SE +/- 0.18, N = 3)
  Dynatron A38: 255.9 (SE +/- 0.84, N = 3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT (GFLOPs/s, More Is Better)
  Dynatron A26: 86.6 (SE +/- 0.06, N = 14)
  Dynatron A38: 86.3 (SE +/- 0.30, N = 15)
  Noctua NH-U9 TR4-SP3: 86.1 (SE +/- 0.28, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN (GFLOPs/s, More Is Better)
  Dynatron A26: 88.6 (SE +/- 0.08, N = 14)
  Dynatron A38: 88.2 (SE +/- 0.34, N = 15)
  Noctua NH-U9 TR4-SP3: 88.1 (SE +/- 0.52, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11 - Time To Compile (Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 110.71 (SE +/- 0.28, N = 3)
  Dynatron A38: 110.94 (SE +/- 0.30, N = 3)
  Dynatron A26: 111.32 (SE +/- 0.11, N = 3)

GNU Radio

GNU Radio - Test: Hilbert Transform (MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 378.2 (SE +/- 1.21, N = 4)
  Dynatron A26: 377.3 (SE +/- 1.82, N = 3)
  Dynatron A38: 376.2 (SE +/- 0.76, N = 9)
  1. 3.8.1.0

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 98.81 (SE +/- 0.12, N = 3)
  Dynatron A38: 99.09 (SE +/- 0.07, N = 3)
  Dynatron A26: 99.33 (SE +/- 0.12, N = 3)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 4.9176 (SE +/- 0.0039, N = 7)
  Dynatron A26: 4.9412 (SE +/- 0.0032, N = 7)
  Dynatron A38: 4.9424 (SE +/- 0.0054, N = 7)
  1. (CXX) g++ options: -O3 -flto -pthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
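The Liquid-DSP runs below push 256-sample buffers through a 57-tap filter, so the kernel being timed is essentially an FIR dot product per output sample. A rough pure-Python sketch of that kernel (purely illustrative — the real library is optimized C):

```python
def fir_filter(taps, samples):
    """Direct-form FIR: each output is the dot product of the taps
    with the most recent len(taps) input samples (zero-padded start)."""
    n = len(taps)
    history = [0.0] * n
    out = []
    for x in samples:
        history = [x] + history[:-1]   # shift newest sample to the front
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

# A 4-tap moving average smooths a step input:
y = fir_filter([0.25] * 4, [0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
```

The samples/s score is how many such input samples the library filters per second across all worker threads.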

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Dynatron A38: 804146667 (SE +/- 2740148.01, N = 3)
  Noctua NH-U9 TR4-SP3: 803310000 (SE +/- 6476302.96, N = 3)
  Dynatron A26: 800190000 (SE +/- 5535858.86, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

GNU Radio

GNU Radio - Test: FIR Filter (MiB/s, More Is Better)
  Dynatron A38: 642.0 (SE +/- 0.99, N = 9)
  Dynatron A26: 641.6 (SE +/- 1.06, N = 3)
  Noctua NH-U9 TR4-SP3: 639.0 (SE +/- 0.72, N = 4)
  1. 3.8.1.0

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  Dynatron A26: 45.80 (SE +/- 0.09, N = 3)
  Noctua NH-U9 TR4-SP3: 45.98 (SE +/- 0.08, N = 3)
  Dynatron A38: 46.01 (SE +/- 0.06, N = 3)

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Dynatron A38: 37.92 (SE +/- 0.10, N = 4)
  Dynatron A26: 37.89 (SE +/- 0.12, N = 4)
  Noctua NH-U9 TR4-SP3: 37.76 (SE +/- 0.05, N = 4)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021 - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  Dynatron A38: 5.599 (SE +/- 0.010, N = 3)
  Dynatron A26: 5.582 (SE +/- 0.018, N = 3)
  Noctua NH-U9 TR4-SP3: 5.577 (SE +/- 0.003, N = 3)
  1. (CXX) g++ options: -O3 -pthread

Liquid-DSP

Liquid-DSP 2021.01.31 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Dynatron A38: 2793733333 (SE +/- 3347304.06, N = 3)
  Noctua NH-U9 TR4-SP3: 2792400000 (SE +/- 5372460.64, N = 3)
  Dynatron A26: 2782866667 (SE +/- 3268196.92, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 344.7 (SE +/- 0.20, N = 3)
  Dynatron A26: 344.2 (SE +/- 0.22, N = 3)
  Dynatron A38: 343.4 (SE +/- 0.41, N = 3)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
  Dynatron A38: 23.59 (SE +/- 0.01, N = 3)
  Noctua NH-U9 TR4-SP3: 23.59 (SE +/- 0.01, N = 3)
  Dynatron A26: 23.67 (SE +/- 0.02, N = 3)

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dDOT (GB/s, More Is Better)
  Dynatron A26: 1116 (SE +/- 1.73, N = 14)
  Noctua NH-U9 TR4-SP3: 1115 (SE +/- 2.15, N = 15)
  Dynatron A38: 1112 (SE +/- 2.00, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Liquid-DSP

Liquid-DSP 2021.01.31 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Dynatron A38: 3028133333 (SE +/- 266666.67, N = 3)
  Dynatron A26: 3028066667 (SE +/- 448454.13, N = 3)
  Noctua NH-U9 TR4-SP3: 3017933333 (SE +/- 2577035.33, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.10.20 - Time To Compile (Seconds, Fewer Is Better)
  Dynatron A38: 26.84 (SE +/- 0.26, N = 9)
  Dynatron A26: 26.88 (SE +/- 0.26, N = 9)
  Noctua NH-U9 TR4-SP3: 26.93 (SE +/- 0.28, N = 8)

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Projector Mount Swivel (Seconds, Fewer Is Better)
  Dynatron A26: 100.72 (SE +/- 0.42, N = 3)
  Noctua NH-U9 TR4-SP3: 100.90 (SE +/- 0.29, N = 3)
  Dynatron A38: 101.05 (SE +/- 0.14, N = 3)
  1. OpenSCAD version 2019.05

oneDNN

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A38: 0.781060 (SE +/- 0.002274, N = 9, MIN: 0.72)
  Noctua NH-U9 TR4-SP3: 0.781851 (SE +/- 0.002596, N = 9, MIN: 0.72)
  Dynatron A26: 0.783464 (SE +/- 0.002597, N = 9, MIN: 0.72)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
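Note the unit in the NAMD result: days/ns (fewer is better), the reciprocal of the ns-per-day figure GROMACS reports above, which makes the two easy to relate:

```python
# days/ns and ns/day are reciprocals of each other.
def days_per_ns_to_ns_per_day(days_per_ns):
    return 1.0 / days_per_ns

# e.g. a score around 0.3811 days/ns corresponds to roughly 2.62 ns
# of simulated time per day of wall-clock compute.
ns_per_day = days_per_ns_to_ns_per_day(0.38110)
```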

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 0.38110 (SE +/- 0.00051, N = 3)
  Dynatron A26: 0.38164 (SE +/- 0.00041, N = 3)
  Dynatron A38: 0.38215 (SE +/- 0.00076, N = 3)

simdjson

simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 3.64 (SE +/- 0.01, N = 3)
  Dynatron A38: 3.63 (SE +/- 0.00, N = 3)
  Dynatron A26: 3.63 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -pthread

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 664.68 (SE +/- 1.52, N = 3, MIN: 637.44)
  Dynatron A38: 666.13 (SE +/- 1.16, N = 3, MIN: 639.05)
  Dynatron A26: 666.48 (SE +/- 1.24, N = 3, MIN: 640.16)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 1.17796 (SE +/- 0.00196, N = 4, MIN: 1.1)
  Dynatron A26: 1.17863 (SE +/- 0.00176, N = 4, MIN: 1.12)
  Dynatron A38: 1.18087 (SE +/- 0.00303, N = 4, MIN: 1.1)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT (GFLOPs/s, More Is Better)
  Dynatron A38: 89.9 (SE +/- 0.05, N = 15)
  Dynatron A26: 89.9 (SE +/- 0.02, N = 14)
  Noctua NH-U9 TR4-SP3: 89.7 (SE +/- 0.13, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Blender

Blender 2.92 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Dynatron A26: 31.85 (SE +/- 0.04, N = 3)
  Noctua NH-U9 TR4-SP3: 31.92 (SE +/- 0.10, N = 3)
  Dynatron A38: 31.92 (SE +/- 0.07, N = 3)

ViennaCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN (GFLOPs/s, More Is Better)
  Dynatron A26: 92.1 (SE +/- 0.03, N = 14)
  Noctua NH-U9 TR4-SP3: 92.1 (SE +/- 0.04, N = 15)
  Dynatron A38: 91.9 (SE +/- 0.13, N = 15)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

ASTC Encoder

ASTC Encoder 2.4 - Preset: Thorough (Seconds, Fewer Is Better)
  Dynatron A38: 7.9898 (SE +/- 0.0064, N = 6)
  Noctua NH-U9 TR4-SP3: 7.9925 (SE +/- 0.0055, N = 6)
  Dynatron A26: 8.0062 (SE +/- 0.0081, N = 6)
  1. (CXX) g++ options: -O3 -flto -pthread

GNU GMP GMPbench

GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
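"Widening integer multiplication" means products whose results need more bits than either operand. Python's arbitrary-precision ints make the effect easy to see; this is only an illustration of the operation GMP chains limb-by-limb, not of GMP's API:

```python
# Multiplying two full-width 64-bit operands yields a 128-bit product.
a = (1 << 64) - 1   # largest 64-bit value
b = (1 << 64) - 1
product = a * b     # needs 128 bits to represent
```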

GNU GMP GMPbench 6.2.1 - Total Time (GMPbench Score, More Is Better)
  Dynatron A26: 5099.2
  Noctua NH-U9 TR4-SP3: 5098.8
  Dynatron A38: 5089.1
  1. (CC) gcc options: -O3 -fomit-frame-pointer -lm

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better)
  Dynatron A26: 19.80 (SE +/- 0.06, N = 3)
  Dynatron A38: 19.80 (SE +/- 0.04, N = 3)
  Noctua NH-U9 TR4-SP3: 19.84 (SE +/- 0.09, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 471.60 (SE +/- 1.14, N = 10)
  Dynatron A26: 471.09 (SE +/- 1.26, N = 10)
  Dynatron A38: 470.69 (SE +/- 1.32, N = 10)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Noctua NH-U9 TR4-SP3: 468.76 (SE +/- 1.02, N = 10)
  Dynatron A26: 468.59 (SE +/- 1.01, N = 10)
  Dynatron A38: 467.96 (SE +/- 0.90, N = 10)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Blender

Blender 2.92 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 93.59 (SE +/- 0.06, N = 3)
  Dynatron A26: 93.72 (SE +/- 0.08, N = 3)
  Dynatron A38: 93.72 (SE +/- 0.01, N = 3)

LuaRadio

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, More Is Better)
  Noctua NH-U9 TR4-SP3: 591.8 (SE +/- 0.62, N = 3)
  Dynatron A26: 591.6 (SE +/- 0.72, N = 3)
  Dynatron A38: 591.0 (SE +/- 0.49, N = 3)

Blender

Blender 2.92 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  Dynatron A26: 80.86 (SE +/- 0.07, N = 3)
  Dynatron A38: 80.92 (SE +/- 0.08, N = 3)
  Noctua NH-U9 TR4-SP3: 80.96 (SE +/- 0.08, N = 3)

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Noctua NH-U9 TR4-SP3: 665.23 (SE +/- 0.56, N = 3, MIN: 638.88)
  Dynatron A38: 665.53 (SE +/- 1.18, N = 3, MIN: 639.97)
  Dynatron A26: 666.05 (SE +/- 1.29, N = 3, MIN: 638.64)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations, along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
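The building block of any finite-difference solver is approximating derivatives from neighboring grid points. A minimal central-difference example (illustrative only — Incompact3d's actual schemes are higher-order compact stencils):

```python
def central_difference(f, x, h):
    """Second-order approximation of f'(x) from the two neighbors of x."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# For f(x) = x**2 the central difference recovers f'(x) = 2x exactly:
slope = central_difference(lambda x: x * x, 1.0, 0.01)
```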

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
  Dynatron A38: 5.15051767 (SE +/- 0.02393955, N = 7)
  Dynatron A26: 5.15065159 (SE +/- 0.02227690, N = 7)
  Noctua NH-U9 TR4-SP3: 5.15653072 (SE +/- 0.01747253, N = 7)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Blender

Blender 2.92 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  Dynatron A26: 111.48 (SE +/- 0.08, N = 3)
  Dynatron A38: 111.53 (SE +/- 0.10, N = 3)
  Noctua NH-U9 TR4-SP3: 111.61 (SE +/- 0.05, N = 3)

LuaRadio

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, More Is Better)
  Dynatron A26: 93.6 (SE +/- 0.03, N = 3)
  Dynatron A38: 93.5 (SE +/- 0.03, N = 3)
  Noctua NH-U9 TR4-SP3: 93.5 (SE +/- 0.06, N = 3)

IndigoBench

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Dynatron A38: 11.40 (SE +/- 0.01, N = 3)
  Dynatron A26: 11.40 (SE +/- 0.03, N = 3)
  Noctua NH-U9 TR4-SP3: 11.39 (SE +/- 0.03, N = 3)

Liquid-DSP

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Dynatron A26: 1614933333 (SE +/- 3637917.60, N = 3)
  Dynatron A38: 1614566667 (SE +/- 2380709.51, N = 3)
  Noctua NH-U9 TR4-SP3: 1613766667 (SE +/- 3887729.99, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A26: 664.81 (SE +/- 1.25, N = 3, MIN: 637.03)
  Noctua NH-U9 TR4-SP3: 664.99 (SE +/- 1.05, N = 3, MIN: 636.72)
  Dynatron A38: 665.27 (SE +/- 0.96, N = 3, MIN: 637.88)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Dynatron A38: 0.879480 (SE +/- 0.000819, N = 7, MIN: 0.84)
  Dynatron A26: 0.879752 (SE +/- 0.000564, N = 7, MIN: 0.84)
  Noctua NH-U9 TR4-SP3: 0.880047 (SE +/- 0.000667, N = 7, MIN: 0.84)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

simdjson

simdjson 0.8.2 - Throughput Test: Kostya (GB/s, more is better):
  Dynatron A38          2.83  (SE +/- 0.01, N = 3)
  Dynatron A26          2.83  (SE +/- 0.00, N = 3)
  Noctua NH-U9 TR4-SP3  2.83  (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -pthread

simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s, more is better):
  Dynatron A38          0.96  (SE +/- 0.00, N = 3)
  Dynatron A26          0.96  (SE +/- 0.00, N = 3)
  Noctua NH-U9 TR4-SP3  0.96  (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -pthread
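The GB/s figures here are parse throughput: bytes of JSON processed per second of wall time. A minimal sketch of the arithmetic behind the metric (the byte count is hypothetical; simdjson itself is a C++ library, this only illustrates the unit):

```python
def parse_throughput_gbs(num_bytes, seconds):
    """Parse throughput in GB/s: bytes processed per second, in units of 1e9."""
    return num_bytes / seconds / 1e9

# At the 2.83 GB/s reported for the Kostya test, a hypothetical 1 MB
# document would parse in roughly 0.35 ms of CPU time.
ms_per_mb = 1e6 / 2.83e9 * 1e3
```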

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 encoder, a CPU-based multi-threaded video encoder for the AV1 video format, using a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, more is better):
  Dynatron A38          0.130  (SE +/- 0.001, N = 3)
  Dynatron A26          0.130  (SE +/- 0.000, N = 3)
  Noctua NH-U9 TR4-SP3  0.130  (SE +/- 0.001, N = 3)
(CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  Dynatron A38          0.2  (SE +/- 0.00, N = 3)
  Dynatron A26          0.2  (SE +/- 0.00, N = 3)
  Noctua NH-U9 TR4-SP3  0.2  (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Dynatron A38          0.50  (SE +/- 0.00, N = 3)
  Dynatron A26          0.50  (SE +/- 0.00, N = 3)
  Noctua NH-U9 TR4-SP3  0.50  (SE +/- 0.00, N = 3)
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

CPU Temperature Monitor

CPU Temperature Monitor, via Phoronix Test Suite system monitoring (Celsius):
  Dynatron A38          Min: 40.25 / Avg: 51.96 / Max: 70.25
  Noctua NH-U9 TR4-SP3  Min: 41.5  / Avg: 56.86 / Max: 79.5
  Dynatron A26          Min: 41    / Avg: 59.01 / Max: 79.25
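To put the monitor data in perspective, the spread between coolers follows directly from the reported averages (the numbers below are taken from the table above):

```python
# Average CPU temperatures (Celsius) as reported by the monitor above.
avg_temp = {
    "Dynatron A38": 51.96,
    "Noctua NH-U9 TR4-SP3": 56.86,
    "Dynatron A26": 59.01,
}

# Identify the coolest-running heatsink and each cooler's delta against it.
best = min(avg_temp, key=avg_temp.get)
delta_vs_best = {k: round(v - avg_temp[best], 2) for k, v in avg_temp.items()}
```

Over the full run the Dynatron A38 averaged about 4.9 C cooler than the Noctua NH-U9 TR4-SP3 and about 7.05 C cooler than the Dynatron A26.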

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N (GB/s, more is better):
  Noctua NH-U9 TR4-SP3  88.6  (SE +/- 7.63, N = 15)
  Dynatron A26          82.6  (SE +/- 9.76, N = 14)
  Dynatron A38          78.9  (SE +/- 4.88, N = 15)
(CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

ViennaCL 1.7.1 - Test: CPU BLAS - sCOPY (GB/s, more is better):
  Dynatron A26          1052  (SE +/- 28.51, N = 14)
  Dynatron A38          1044  (SE +/- 27.57, N = 15)
  Noctua NH-U9 TR4-SP3  1035  (SE +/- 30.05, N = 15)
(CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
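ViennaCL reports these BLAS results as effective memory bandwidth. A rough sketch of how such a GB/s figure can be derived for dGEMV-N (y = A*x on an n-by-n double-precision matrix); the byte-count model here is an assumption for illustration and may differ from ViennaCL's own accounting:

```python
def dgemv_bandwidth_gbs(n, seconds):
    """Effective bandwidth of y = A*x in double precision: read n*n matrix
    entries plus n vector entries, write n results, 8 bytes per double."""
    bytes_moved = (n * n + 2 * n) * 8
    return bytes_moved / seconds / 1e9

# Hypothetical timing: a 1000x1000 dGEMV completing in 0.1 ms
# would correspond to about 80 GB/s of effective bandwidth.
gbs = dgemv_bandwidth_gbs(1000, 1e-4)
```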

SVT-VP9

SVT-VP9 0.3 - CPU Temperature Monitor (Celsius, fewer is better):
  Dynatron A38          Min: 44.0 / Avg: 46.5 / Max: 54.5
  Noctua NH-U9 TR4-SP3  Min: 47.0 / Avg: 49.7 / Max: 58.5
  Dynatron A26          Min: 47.5 / Avg: 50.6 / Max: 58.0

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Dynatron A26          347.78  (SE +/- 6.37, N = 15)
  Dynatron A38          346.82  (SE +/- 6.16, N = 15)
  Noctua NH-U9 TR4-SP3  345.24  (SE +/- 5.87, N = 15)
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

108 Results Shown

Xcompact3d Incompact3d
oneDNN
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 4K
Stockfish
AOM AV1
ViennaCL
srsLTE
oneDNN
AOM AV1
ViennaCL
AOM AV1
Xcompact3d Incompact3d
AOM AV1
Mobile Neural Network
ViennaCL
oneDNN
Timed Erlang/OTP Compilation
GNU Radio
OpenSCAD
SVT-AV1
oneDNN:
  IP Shapes 3D - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
Mobile Neural Network
LuaRadio
OpenSCAD
oneDNN
OpenSCAD
Chaos Group V-RAY
AOM AV1
Mobile Neural Network
oneDNN
OpenSCAD
AOM AV1
ASTC Encoder
AOM AV1
Mobile Neural Network
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
SVT-AV1
simdjson
AOM AV1
SVT-HEVC
ViennaCL
IndigoBench
oneDNN
GNU Radio:
  FM Deemphasis Filter
  IIR Filter
srsLTE
GNU Radio
ViennaCL
oneDNN
SVT-HEVC
oneDNN
Mobile Neural Network
srsLTE
ViennaCL:
  CPU BLAS - dGEMM-NT
  CPU BLAS - dGEMM-NN
Timed Node.js Compilation
GNU Radio
Timed GDB GNU Debugger Compilation
ASTC Encoder
Liquid-DSP
GNU Radio
Blender
SVT-HEVC
GROMACS
Liquid-DSP
LuaRadio
Timed Apache Compilation
ViennaCL
Liquid-DSP
Timed Linux Kernel Compilation
OpenSCAD
oneDNN
NAMD
simdjson
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
ViennaCL
Blender
ViennaCL
ASTC Encoder
GNU GMP GMPbench
Timed Mesa Compilation
SVT-VP9:
  PSNR/SSIM Optimized - Bosphorus 1080p
  VMAF Optimized - Bosphorus 1080p
Blender
LuaRadio
Blender
oneDNN
Xcompact3d Incompact3d
Blender
LuaRadio
IndigoBench
Liquid-DSP
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
simdjson:
  Kostya
  LargeRand
SVT-AV1
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
CPU Temperature Monitor
ViennaCL:
  CPU BLAS - dGEMV-N
  CPU BLAS - sCOPY
SVT-VP9
SVT-VP9