dddas

AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2306242-NE-DDDAS565146

Run Management

Result Identifier - Date Run - Test Duration:
a - June 23 2023 - 13 Hours, 58 Minutes
b - June 24 2023 - 4 Hours, 14 Minutes
Average Test Duration: 9 Hours, 6 Minutes


dddas - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.2.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 3840x2160

Dddas Performance - System Logs:
- Transparent Huge Pages: madvise
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x830104d
- BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D1820201-101
- Python 3.10.6
- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

a vs. b Comparison (Phoronix Test Suite): chart of per-test percentage differences between the two runs. The largest deltas are Stress-NG Socket Activity (211.8%), LevelDB Fill Sync (50.5% / 50%), oneDNN IP Shapes 1D - f32 - CPU (19.1%), Stress-NG Pipe (18%), and Whisper.cpp ggml-small.en (8.9%); the remaining flagged results differ by roughly 2-8%.

dddas: side-by-side result summary table for runs a and b across all tests (Whisper.cpp, SQLite, libxsmm, oneDNN, PETSc, nekRS, HPCG, QMCPACK, Mocassin, Palabos, OSPRay, LevelDB, Xonotic, HeFFTe, Stress-NG, Laghos, GPAW, VVenC, Neural Magic DeepSparse, Kripke, CP2K, SVT-AV1, Liquid-DSP, and others). Individual results are broken out per test in the sections that follow. (OpenBenchmarking.org)

Whisper.cpp

Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.
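For context, the transcription work that this benchmark times looks roughly like the following whisper.h usage. This is a minimal sketch only: the model path, audio loading, and thread count below are illustrative assumptions, not parameters taken from the test profile.

```cpp
#include "whisper.h"
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical model file; the benchmark uses ggml-base/small/medium.en models.
    struct whisper_context *ctx = whisper_init_from_file("ggml-base.en.bin");
    if (!ctx) return 1;

    // 16 kHz mono PCM samples; decoding the input WAV is omitted in this sketch.
    std::vector<float> pcm;

    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    params.n_threads = 8;  // arbitrary; the test runs on all CPU threads

    if (whisper_full(ctx, params, pcm.data(), (int) pcm.size()) == 0) {
        for (int i = 0; i < whisper_full_n_segments(ctx); ++i)
            std::printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }
    whisper_free(ctx);
}
```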

Whisper.cpp 1.4 - Model: ggml-medium.en - Input: 2016 State of the Union (Seconds, fewer is better): a: 1018.28, b: 1003.11. SE +/- 11.84, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

Whisper.cpp 1.4 - Model: ggml-small.en - Input: 2016 State of the Union (Seconds, fewer is better): a: 395.71, b: 363.32. SE +/- 6.52, N = 9. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
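As a rough illustration of the insert-heavy workload being timed, the sketch below performs batched inserts into an indexed table through the SQLite C API. The schema, row count, and payload are arbitrary assumptions, not the test profile's actual parameters.

```cpp
#include <sqlite3.h>

int main() {
    sqlite3 *db = nullptr;
    if (sqlite3_open("bench.db", &db) != SQLITE_OK) return 1;

    // Indexed table, mirroring the "insertions on an indexed database" description.
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(id INTEGER PRIMARY KEY, v TEXT);"
                     "CREATE INDEX IF NOT EXISTS idx_v ON t(v);",
                 nullptr, nullptr, nullptr);

    sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr);
    sqlite3_stmt *ins = nullptr;
    sqlite3_prepare_v2(db, "INSERT INTO t(v) VALUES (?1);", -1, &ins, nullptr);
    for (int i = 0; i < 100000; ++i) {               // insertion count is arbitrary here
        sqlite3_bind_text(ins, 1, "payload", -1, SQLITE_STATIC);
        sqlite3_step(ins);
        sqlite3_reset(ins);
    }
    sqlite3_finalize(ins);
    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
    sqlite3_close(db);
}
```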

SQLite 3.41.2 - Threads / Copies: 64 (Seconds, fewer is better): a: 681.45, b: 680.81. SE +/- 0.71, N = 3. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

SQLite 3.41.2 - Threads / Copies: 32 (Seconds, fewer is better): a: 505.42, b: 502.77. SE +/- 1.08, N = 3. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
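The M N K figures in the results below refer to square matrix dimensions. For orientation, here is a plain C++ reference for the small dense multiply that libxsmm JIT-specializes; it deliberately does not use the libxsmm API, and the dimension is just the 128 case from this result file.

```cpp
#include <cstddef>
#include <vector>

// Naive reference for C += A * B with square M = N = K matrices
// (illustrative only; libxsmm generates specialized kernels for these shapes).
void gemm_ref(std::size_t n, const std::vector<double> &A,
              const std::vector<double> &B, std::vector<double> &C) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            const double a = A[i * n + k];
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += a * B[k * n + j];
        }
}

int main() {
    const std::size_t n = 128;  // matches the "M N K: 128" result below
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);
    gemm_ref(n, A, B, C);
}
```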

libxsmm 2-1.17-3645 - M N K: 128 (GFLOPS/s, more is better): a: 635.8, b: 635.4. SE +/- 0.22, N = 3. 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 - Threads / Copies: 16 (Seconds, fewer is better): a: 373.82, b: 374.33. SE +/- 1.37, N = 3. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

SQLite 3.41.2 - Threads / Copies: 4 (Seconds, fewer is better): a: 266.58, b: 262.62. SE +/- 2.89, N = 4. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 935.61 (MIN: 895.45), b: 932.99 (MIN: 924.9). SE +/- 7.63, N = 15. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PETSc

PETSc, the Portable, Extensible Toolkit for Scientific Computation, is for the scalable (parallel) solution of scientific applications modeled by partial differential equations. This test profile runs the PETSc "make streams" benchmark and records the throughput rate when all available cores are utilized for the MPI Streams build. Learn more via the OpenBenchmarking.org test page.
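The "Streams" number is a memory-bandwidth figure. The generic triad kernel sketched below shows the kind of operation such a streams benchmark times; it is an illustration under assumed array sizes, not PETSc's actual "make streams" code (which runs one MPI rank per core rather than OpenMP threads).

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 1 << 26;                  // arbitrary working-set size
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
    const double scalar = 3.0;

    auto t0 = std::chrono::steady_clock::now();
    #pragma omp parallel for                         // stand-in for the per-core MPI ranks
    for (long i = 0; i < (long) n; ++i)
        c[i] = a[i] + scalar * b[i];                 // triad: two loads + one store per element
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    double mbytes = 3.0 * n * sizeof(double) / 1e6;  // bytes moved by the three streams
    std::printf("%.1f MB/s\n", mbytes / secs);
}
```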

PETSc 3.19 - Test: Streams (MB/s, more is better): a: 58312.10, b: 58276.79. SE +/- 71.95, N = 3. 1. (CC) gcc options: -fPIC -O3 -O2 -lpthread -ludev -lpciaccess -lm

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 - Threads / Copies: 8 (Seconds, fewer is better): a: 291.25, b: 284.87. SE +/- 2.25, N = 3. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

SQLite 3.41.2 - Threads / Copies: 2 (Seconds, fewer is better): a: 243.22, b: 237.19. SE +/- 1.34, N = 3. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.

nekRS 23.0 - Input: Kershaw (flops/rank, more is better): a: 2123046667, b: 2109640000. SE +/- 3171604.92, N = 3. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

nekRS 23.0 - Input: TurboPipe Periodic (flops/rank, more is better): a: 3444566667, b: 3441770000. SE +/- 1942175.18, N = 3. 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
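HPCG's core kernel is a preconditioned conjugate-gradient iteration over a large sparse system. The tiny dense example below shows only the basic CG update steps for orientation; it is not HPCG's implementation and the 3x3 system is purely illustrative.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

int main() {
    // Small symmetric positive-definite system A x = b for illustration.
    const int n = 3;
    double A[3][3] = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}};
    std::array<double, 3> b{1, 2, 3}, x{0, 0, 0}, r = b, p = r;

    auto dot = [&](const std::array<double, 3> &u, const std::array<double, 3> &v) {
        double s = 0; for (int i = 0; i < n; ++i) s += u[i] * v[i]; return s;
    };

    double rr = dot(r, r);
    for (int it = 0; it < 50 && std::sqrt(rr) > 1e-12; ++it) {
        std::array<double, 3> Ap{};
        for (int i = 0; i < n; ++i)                 // sparse matrix-vector product in real HPCG
            for (int j = 0; j < n; ++j) Ap[i] += A[i][j] * p[j];
        double alpha = rr / dot(p, Ap);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(r, r);
        double beta = rr_new / rr; rr = rr_new;
        for (int i = 0; i < n; ++i) p[i] = r[i] + beta * p[i];
    }
    std::printf("x = %.4f %.4f %.4f\n", x[0], x[1], x[2]);
}
```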

High Performance Conjugate Gradient 3.1 - X Y Z: 104 104 104 - RT: 60 (GFLOP/s, more is better): a: 10.96, b: 11.02. SE +/- 0.02, N = 3. 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.16 - Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, fewer is better): a: 196.98, b: 191.15. SE +/- 1.72, N = 3. 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 256 (GFLOPS/s, more is better): a: 910.4, b: 907.4. SE +/- 3.58, N = 3. 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Monte Carlo Simulations of Ionised Nebulae

Mocassin is the Monte Carlo Simulations of Ionised Nebulae. MOCASSIN is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2.02.73.3 - Input: Dust 2D tau100.0 (Seconds, fewer is better): a: 181.27, b: 180.73. SE +/- 0.15, N = 3. 1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O2 -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lz

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.16 - Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, fewer is better): a: 175.39, b: 174.82. SE +/- 0.14, N = 3. 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.

Palabos 2.3 - Grid Size: 100 (Mega Site Updates Per Second, more is better): a: 121.93, b: 122.23. SE +/- 0.14, N = 3. 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better): a: 9.74548, b: 9.76771. SE +/- 0.00531, N = 3.

Whisper.cpp

Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.

Whisper.cpp 1.4 - Model: ggml-base.en - Input: 2016 State of the Union (Seconds, fewer is better): a: 156.48, b: 151.05. SE +/- 1.99, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): a: 128.57, b: 128.66. SE +/- 0.05, N = 3.

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.

Palabos 2.3 - Grid Size: 400 (Mega Site Updates Per Second, more is better): a: 139.30, b: 140.08. SE +/- 0.57, N = 3. 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

Palabos 2.3 - Grid Size: 500 (Mega Site Updates Per Second, more is better): a: 143.85, b: 144.06. SE +/- 0.27, N = 3. 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

QMCPACK 3.16 - Input: Li2_STO_ae (Total Execution Time - Seconds, fewer is better): a: 136.22, b: 132.76. SE +/- 0.40, N = 3. 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
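The fill/read benchmarks below map onto ordinary Put/Get calls against an on-disk database. A minimal sketch of that usage follows; the database path, key format, value size, and counts are assumptions for illustration, not the db_bench settings used by the test profile.

```cpp
#include <leveldb/db.h>
#include <cstdio>
#include <string>

int main() {
    leveldb::DB *db = nullptr;
    leveldb::Options options;
    options.create_if_missing = true;   // compression defaults to Snappy when available

    leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-bench", &db);
    if (!s.ok()) return 1;

    // Sequential fill: write keys in increasing order.
    for (int i = 0; i < 100000; ++i) {
        char key[16];
        std::snprintf(key, sizeof key, "%08d", i);
        db->Put(leveldb::WriteOptions(), key, std::string(100, 'x'));
    }

    // Random-read style lookup of a single key.
    std::string value;
    db->Get(leveldb::ReadOptions(), "00004242", &value);

    delete db;
}
```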

LevelDB 1.23 - Benchmark: Sequential Fill (Microseconds Per Op, fewer is better): a: 254.94, b: 255.49. SE +/- 0.72, N = 3. 1. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

LevelDB 1.23 - Benchmark: Sequential Fill (MB/s, more is better): a: 27.8, b: 27.7. SE +/- 0.07, N = 3. 1. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 3840 x 2160 - Effects Quality: Ultimate (Frames Per Second, more is better): a: 311.40 (MIN: 97 / MAX: 487), b: 311.77 (MIN: 98 / MAX: 488). SE +/- 0.68, N = 3.

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.23 - Benchmark: Random Delete (Microseconds Per Op, fewer is better): a: 245.16, b: 245.18. SE +/- 0.46, N = 3. 1. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 512 (GFLOP/s, more is better): a: 15.35, b: 15.35. SE +/- 0.00, N = 3. 1. (CXX) g++ options: -O3

HeFFTe 2.3 - Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 512 (GFLOP/s, more is better): a: 15.41, b: 15.41. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
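Each Stress-NG result is reported in "bogo ops", the count of stressor loop iterations completed per second. As orientation for what a stressor such as the Pipe test exercises, here is a minimal POSIX writer/reader loop; it is an illustration only, not stress-ng's implementation, and the buffer size and iteration count are arbitrary.

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstring>

int main() {
    int fd[2];
    if (pipe(fd) != 0) return 1;

    char buf[4096];
    std::memset(buf, 0xAA, sizeof buf);

    pid_t pid = fork();
    if (pid == 0) {                        // child: drain the pipe
        close(fd[1]);
        while (read(fd[0], buf, sizeof buf) > 0) { /* each read is one "bogo op" */ }
        _exit(0);
    }

    close(fd[0]);                          // parent: hammer the pipe with writes
    for (long ops = 0; ops < 1000000; ++ops)
        if (write(fd[1], buf, sizeof buf) < 0) break;
    close(fd[1]);
    waitpid(pid, nullptr, 0);
}
```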

Stress-NG 0.15.10 - Test: Socket Activity (Bogo Ops/s, more is better): a: 3072.80, b: 9580.27. SE +/- 1064.20, N = 15. 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Stress-NG 0.15.10 - Test: Pipe (Bogo Ops/s, more is better): a: 18809740.35, b: 22201912.16. SE +/- 858971.94, N = 15. 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.

Laghos 3.1 - Test: Sedov Blast Wave, ube_922_hex.mesh (Major Kernels Total Rate, more is better): a: 264.34, b: 265.39. SE +/- 0.22, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 23.6 - Input: Carbon Nanotube (Seconds, fewer is better): a: 110.85, b: 110.95. SE +/- 0.26, N = 3. 1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.8 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better): a: 5.440, b: 5.387. SE +/- 0.015, N = 3. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.

SQLite 3.41.2 - Threads / Copies: 1 (Seconds, fewer is better): a: 106.01, b: 105.00. SE +/- 0.30, N = 3. 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better): a: 9.86893, b: 9.89124. SE +/- 0.00341, N = 3.

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 2560 x 1440 - Effects Quality: Ultimate (Frames Per Second, more is better): a: 384.02 (MIN: 99 / MAX: 847), b: 381.32 (MIN: 106 / MAX: 824). SE +/- 1.62, N = 3.

Xonotic 0.8.6 - Resolution: 1920 x 1200 - Effects Quality: Ultimate (Frames Per Second, more is better): a: 384.77 (MIN: 102 / MAX: 919), b: 386.81 (MIN: 104 / MAX: 887). SE +/- 0.49, N = 3.

Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: Ultimate (Frames Per Second, more is better): a: 386.94 (MIN: 97 / MAX: 892), b: 386.28 (MIN: 101 / MAX: 871). SE +/- 2.17, N = 3.

Xonotic 0.8.6 - Resolution: 3840 x 2160 - Effects Quality: Ultra (Frames Per Second, more is better): a: 420.75 (MIN: 194 / MAX: 579), b: 423.33 (MIN: 194 / MAX: 581). SE +/- 0.23, N = 3.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 3252.18 (MIN: 3200.87), b: 3275.76 (MIN: 3269.28). SE +/- 29.27, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): a: 3235.41 (MIN: 3194.45), b: 3226.70 (MIN: 3215.86). SE +/- 25.05, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 3244.46 (MIN: 3177.37), b: 3227.02 (MIN: 3219.36). SE +/- 29.09, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 3840 x 2160 - Effects Quality: High (Frames Per Second, more is better): a: 467.69 (MIN: 222 / MAX: 635), b: 469.75 (MIN: 225 / MAX: 637). SE +/- 0.24, N = 3.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 938.10 (MIN: 914.11), b: 958.84 (MIN: 951.92). SE +/- 8.51, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): a: 976.47 (MIN: 961.99), b: 987.99 (MIN: 979.32). SE +/- 3.39, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, more is better): a: 518.67 (MIN: 259 / MAX: 910), b: 524.55 (MIN: 285 / MAX: 905). SE +/- 1.23, N = 3.

Xonotic 0.8.6 - Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, more is better): a: 521.50 (MIN: 282 / MAX: 935), b: 521.20 (MIN: 285 / MAX: 919). SE +/- 0.46, N = 3.

Xonotic 0.8.6 - Resolution: 2560 x 1440 - Effects Quality: Ultra (Frames Per Second, more is better): a: 520.79 (MIN: 272 / MAX: 931), b: 527.21 (MIN: 294 / MAX: 931). SE +/- 2.20, N = 3.

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.
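The benchmark feeds Z3 an SMT-LIB file (1.smt2 / 2.smt2) and times the solve. Programmatically, the same kind of satisfiability query looks like the z3++ sketch below, with a toy constraint system standing in for the benchmark's much larger formulas.

```cpp
#include <z3++.h>
#include <iostream>

int main() {
    z3::context c;
    z3::solver s(c);

    // Toy constraints; the benchmark instead loads a large .smt2 problem.
    z3::expr x = c.int_const("x");
    z3::expr y = c.int_const("y");
    s.add(x + y == 10);
    s.add(x > y);

    if (s.check() == z3::sat) {
        z3::model m = s.get_model();
        std::cout << "x = " << m.eval(x) << ", y = " << m.eval(y) << "\n";
    }
}
```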

Z3 Theorem Prover 4.12.1 - SMT File: 2.smt2 (Seconds, fewer is better): a: 76.01, b: 76.12. SE +/- 0.12, N = 3. 1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better): a: 4.62468, b: 4.62434. SE +/- 0.00170, N = 3.

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, more is better): a: 561.44 (MIN: 330 / MAX: 956), b: 563.79 (MIN: 337 / MAX: 945). SE +/- 1.66, N = 3.

Xonotic 0.8.6 - Resolution: 2560 x 1440 - Effects Quality: High (Frames Per Second, more is better): a: 560.97 (MIN: 336 / MAX: 962), b: 567.98 (MIN: 347 / MAX: 923). SE +/- 0.81, N = 3.

Xonotic 0.8.6 - Resolution: 1920 x 1200 - Effects Quality: High (Frames Per Second, more is better): a: 561.01 (MIN: 341 / MAX: 967), b: 567.94 (MIN: 343 / MAX: 932). SE +/- 3.14, N = 3.

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better): a: 4.93554, b: 4.98028. SE +/- 0.00393, N = 3.

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 512 (GFLOP/s, more is better): a: 27.71, b: 27.73. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -O3

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.23 - Benchmark: Seek Random (Microseconds Per Op, fewer is better): a: 65.84, b: 64.56. SE +/- 0.19, N = 3. 1. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): a: 7.67668, b: 7.70051. SE +/- 0.01066, N = 3.

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

HeFFTe 2.3 - Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 512 (GFLOP/s, more is better): a: 30.01, b: 30.04. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -O3

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 185.74, b: 183.81. SE +/- 1.22, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 86.13, b: 87.02. SE +/- 0.56, N = 3.

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 3840 x 2160 - Effects Quality: Low (Frames Per Second, more is better): a: 670.04 (MIN: 387 / MAX: 1175), b: 675.84 (MIN: 413 / MAX: 1166). SE +/- 1.56, N = 3.

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. More details on the CP2K benchmark test cases can be found at https://www.cp2k.org/performance. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 2023.1 - Input: Fayalite-FIST (Seconds, fewer is better): a: 123.83, b: 122.98. 1. (F9X) gfortran options: -fopenmp -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kdbm -lcp2kgrid -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -lhdf5 -lhdf5_hl -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -lopenblas -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

Xonotic

This is a benchmark of Xonotic, which is a fork of the DarkPlaces-based Nexuiz game. Development began in March of 2010 on the Xonotic game for this open-source first person shooter title. Learn more via the OpenBenchmarking.org test page.

Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: Low (Frames Per Second, more is better): a: 671.42 (MIN: 430 / MAX: 1177), b: 669.86 (MIN: 439 / MAX: 1136). SE +/- 0.98, N = 3.

Xonotic 0.8.6 - Resolution: 1920 x 1200 - Effects Quality: Low (Frames Per Second, more is better): a: 671.95 (MIN: 431 / MAX: 1193), b: 676.08 (MIN: 427 / MAX: 1181). SE +/- 1.00, N = 3.

Xonotic 0.8.6 - Resolution: 2560 x 1440 - Effects Quality: Low (Frames Per Second, more is better): a: 673.17 (MIN: 426 / MAX: 1185), b: 687.49 (MIN: 439 / MAX: 1194). SE +/- 2.49, N = 3.

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 62.44, b: 62.31. SE +/- 0.10, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 255.99, b: 256.64. SE +/- 0.36, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 552.28, b: 551.16. SE +/- 1.59, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 28.88, b: 28.92. SE +/- 0.10, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 556.61, b: 550.43. SE +/- 0.31, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 28.61, b: 28.99. SE +/- 0.06, N = 3.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): a: 1.177229 (MIN: 0.89), b: 1.089440 (MIN: 0.97). SE +/- 0.016952, N = 14. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 134.41, b: 134.32. SE +/- 0.21, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 119.01, b: 119.09. SE +/- 0.18, N = 3.

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.8 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better): a: 10.93, b: 10.89. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): a: 482.75, b: 485.95. SE +/- 2.43, N = 3.

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better): a: 33.10, b: 32.91. SE +/- 0.16, N = 3.

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.6 (Throughput FoM, more is better): a: 148243333, b: 146215600. SE +/- 636875.17, N = 3. 1. (CXX) g++ options: -O3 -fopenmp -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 34.80, b: 33.48. SE +/- 0.22, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 28.73, b: 29.86. SE +/- 0.18, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 22.78, b: 23.22. SE +/- 0.14, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 43.88, b: 43.05. SE +/- 0.26, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 61.17, b: 60.66. SE +/- 0.04, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 16.35, b: 16.48. SE +/- 0.01, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 61.20, b: 60.86. SE +/- 0.05, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 16.34, b: 16.43. SE +/- 0.01, N = 3.

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.

Laghos 3.1 - Test: Triple Point Problem (Major Kernels Total Rate, more is better): a: 220.46, b: 219.12. SE +/- 0.34, N = 3. 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 12.88, b: 12.75. SE +/- 0.03, N = 3.

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 77.58, b: 78.40. SE +/- 0.19, N = 3.

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): a: 3.756, b: 3.721. SE +/- 0.010, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.8 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better): a: 13.88, b: 13.77. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.23 - Benchmark: Random Read (Microseconds Per Op, fewer is better): a: 43.49, b: 43.55. SE +/- 0.19, N = 3. 1. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

LevelDB 1.23 - Benchmark: Hot Read (Microseconds Per Op, fewer is better): a: 43.14, b: 42.76. SE +/- 0.21, N = 3. 1. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): a: 46.35, b: 46.51. SE +/- 0.03, N = 3.

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better): a: 21.57, b: 21.49. SE +/- 0.01, N = 3.

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.
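The timed operation is a straightforward encode pass over PCM frames. A minimal libopus sketch of one frame encode is shown below; the sample rate, channel count, and silent input frame are illustrative assumptions (the benchmark itself invokes the Opus-Tools encoder on a WAV file).

```cpp
#include <opus/opus.h>
#include <cstdio>
#include <vector>

int main() {
    int err = 0;
    OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
    if (err != OPUS_OK) return 1;

    const int frame_size = 960;                      // 20 ms at 48 kHz
    std::vector<opus_int16> pcm(frame_size * 2, 0);  // silent stereo frame; WAV decode omitted
    unsigned char packet[4000];

    opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size, packet, sizeof packet);
    std::printf("encoded %d bytes\n", (int) bytes);

    opus_encoder_destroy(enc);
}
```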

Opus Codec Encoding 1.4 - WAV To Opus Encode (Seconds, fewer is better): a: 28.70, b: 28.83. SE +/- 0.05, N = 5. 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUba0.3490.6981.0471.3961.745SE +/- 0.01212, N = 101.301811.55099MIN: 1.19MIN: 1.331. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamba1530456075SE +/- 0.04, N = 367.5467.82

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Streamba50100150200250SE +/- 0.13, N = 3236.82235.81

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streamba3691215SE +/- 0.03, N = 311.9812.29

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Streamba20406080100SE +/- 0.18, N = 383.4381.34

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
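
For context, driving the eSpeak-NG synthesizer through its library API looks roughly like the sketch below; the test itself reads the full Project Gutenberg text to a WAV file, so the short string and synchronous output mode here are illustrative assumptions.

#include <cstring>
#include <espeak-ng/speak_lib.h>

int main() {
    // Synchronous mode: espeak_Synth() generates audio before returning
    int sample_rate = espeak_Initialize(AUDIO_OUTPUT_SYNCHRONOUS, 0, nullptr, 0);
    if (sample_rate < 0) return 1;

    const char* text = "The Outline of Science.";
    espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                 espeakCHARS_AUTO, nullptr, nullptr);
    espeak_Synchronize();   // wait for synthesis to finish
    espeak_Terminate();
    return 0;
}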

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 1.51Text-To-Speech Synthesisab714212835SE +/- 0.34, N = 431.0831.491. (CXX) g++ options: -O2

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamba20406080100SE +/- 0.10, N = 3106.66107.37

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamba306090120150SE +/- 0.14, N = 3149.96148.97

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamba3691215SE +/- 0.02, N = 310.8610.91

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamba20406080100SE +/- 0.13, N = 391.9891.57

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamba1122334455SE +/- 0.05, N = 349.3249.49

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamba70140210280350SE +/- 0.39, N = 3324.27323.09

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Futexba1000K2000K3000K4000K5000KSE +/- 56259.21, N = 44644715.004610857.401. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streamba246810SE +/- 0.0378, N = 36.83307.0704

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streamba306090120150SE +/- 0.75, N = 3146.19141.30

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
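
A minimal sketch of the kind of small-matrix kernel this test measures, assuming libxsmm's classic dispatch interface; the M = N = K = 64 shape matches the "64" result shown below, and the matrix contents are placeholders.

#include <vector>
#include <libxsmm.h>

int main() {
    // Small-matrix double-precision GEMM: M = N = K = 64
    const libxsmm_blasint m = 64, n = 64, k = 64;
    std::vector<double> a(m * k, 1.0), b(k * n, 1.0), c(m * n, 0.0);

    libxsmm_init();
    // JIT-dispatch a kernel for this exact shape; NULL selects default leading
    // dimensions, alpha/beta, flags, and prefetch strategy
    libxsmm_dmmfunction kernel =
        libxsmm_dmmdispatch(m, n, k, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
    if (kernel) kernel(a.data(), b.data(), c.data());   // C := alpha*A*B + beta*C
    libxsmm_finalize();
    return 0;
}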

OpenBenchmarking.orgGFLOPS/s, More Is Betterlibxsmm 2-1.17-3645M N K: 64ba70140210280350SE +/- 0.09, N = 3318.7318.51. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

OpenBenchmarking.orgGFLOPS/s, More Is Betterlibxsmm 2-1.17-3645M N K: 32ba4080120160200SE +/- 0.07, N = 3160.7160.51. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
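
Denoising a framebuffer with Open Image Denoise follows the pattern sketched below (CPU device, generic "RT" filter, as in the runs below); the resolution and the shared in-place buffer are illustrative assumptions rather than the test profile's actual images.

#include <OpenImageDenoise/oidn.hpp>

int main() {
    const int width = 1920, height = 1080;   // placeholder resolution

    oidn::DeviceRef device = oidn::newDevice(oidn::DeviceType::CPU);
    device.commit();

    // One float3 color buffer, denoised in place for simplicity
    oidn::BufferRef color = device.newBuffer(width * height * 3 * sizeof(float));

    oidn::FilterRef filter = device.newFilter("RT");   // generic ray-tracing denoise filter
    filter.setImage("color",  color, oidn::Format::Float3, width, height);
    filter.setImage("output", color, oidn::Format::Float3, width, height);
    filter.set("hdr", true);                           // treat input as HDR radiance
    filter.commit();
    filter.execute();

    const char* message = nullptr;
    return device.getError(message) == oidn::Error::None ? 0 : 1;
}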

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RTLightmap.hdr.4096x4096 - Device: CPU-Onlyba0.1350.270.4050.540.675SE +/- 0.00, N = 30.600.60

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: IO_uringba90K180K270K360K450KSE +/- 726.45, N = 3440335.12439798.241. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: MMAPba100200300400500SE +/- 0.81, N = 3439.24437.111. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Mallocab20M40M60M80M100MSE +/- 44041.52, N = 392853207.1392812375.371. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Cloningba7001400210028003500SE +/- 2.74, N = 33360.523354.401. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: MEMFDab90180270360450SE +/- 0.62, N = 3395.11394.501. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Atomicba100200300400500SE +/- 0.46, N = 3480.51480.061. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: CPU Cacheab300K600K900K1200K1500KSE +/- 19308.61, N = 31624118.541535034.641. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
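
The buffer-length/filter-length parameters in the configurations below presumably map onto FIR filtering work of roughly the following shape; this is a loose sketch against liquid-dsp's firfilt_crcf interface with placeholder coefficients, not the benchmark's actual kernel selection.

#include <complex>
#include <vector>
#include <liquid/liquid.h>

int main() {
    // 57-tap FIR filter, mirroring the "Filter Length: 57" configuration
    const unsigned int h_len = 57;
    std::vector<float> h(h_len, 1.0f / h_len);   // simple moving-average taps as a stand-in

    firfilt_crcf q = firfilt_crcf_create(h.data(), h_len);

    std::complex<float> x(1.0f, 0.0f), y;
    for (int i = 0; i < 256; ++i) {              // one 256-sample buffer, as in the benchmark
        firfilt_crcf_push(q, x);                 // shift the new sample into the window
        firfilt_crcf_execute(q, &y);             // compute one filtered output sample
    }

    firfilt_crcf_destroy(q);
    return 0;
}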

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 512ab110M220M330M440M550MSE +/- 148361.42, N = 35063266675060500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 512ba20M40M60M80M100MSE +/- 37834.43, N = 382224000821236671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Zlibba10002000300040005000SE +/- 2.96, N = 34518.884517.781. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 512ba70M140M210M280M350MSE +/- 399263.21, N = 33145600003137533331. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Pthreadba30K60K90K120K150KSE +/- 521.45, N = 3128387.57128353.641. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 32ba80M160M240M320M400MSE +/- 120138.81, N = 33551100003545700001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 8 - Buffer Length: 256 - Filter Length: 57ba90M180M270M360M450MSE +/- 176099.72, N = 34100900004091833331. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Memory Copyingba2K4K6K8K10KSE +/- 6.50, N = 310984.9110973.651. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: NUMAab160320480640800SE +/- 5.01, N = 3752.30741.661. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 512ba30M60M90M120M150MSE +/- 32829.53, N = 31601300001601133331. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Matrix 3D Mathab6001200180024003000SE +/- 1.75, N = 32806.092795.851. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Vector Shuffleab5K10K15K20K25KSE +/- 43.57, N = 322825.4422200.861. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Function Callab5K10K15K20K25KSE +/- 38.00, N = 324278.3424275.231. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Semaphoresba15M30M45M60M75MSE +/- 919418.56, N = 371041068.4866510329.661. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Wide Vector Mathab300K600K900K1200K1500KSE +/- 3256.42, N = 31501239.291496970.661. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Vector Floating Pointba20K40K60K80K100KSE +/- 139.68, N = 395693.9394803.761. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Glibc C String Functionsab7M14M21M28M35MSE +/- 238790.03, N = 333453867.3233092079.251. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 57ba400M800M1200M1600M2000MSE +/- 19718378.34, N = 3183700000018360333331. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: System V Message Passingab2M4M6M8M10MSE +/- 13654.50, N = 310692419.8810677047.791. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Floating Pointba2K4K6K8K10KSE +/- 7.25, N = 311221.5511201.441. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 512ba9M18M27M36M45MSE +/- 67087.84, N = 341475000412776671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Pollba900K1800K2700K3600K4500KSE +/- 1922.15, N = 34101817.964084623.291. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 64 - Buffer Length: 256 - Filter Length: 32ba500M1000M1500M2000M2500MSE +/- 296273.15, N = 3226970000022507333331. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Mutexab4M8M12M16M20MSE +/- 22386.94, N = 318827346.2818816044.491. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: AVL Treeab60120180240300SE +/- 0.22, N = 3283.41282.421. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Cryptoba20K40K60K80K100KSE +/- 78.61, N = 378455.1978260.171. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 57ba300M600M900M1200M1500MSE +/- 2366666.67, N = 3151210000015062666671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Context Switchingba2M4M6M8M10MSE +/- 22031.57, N = 311620881.0411409509.771. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Forkingab11K22K33K44K55KSE +/- 291.28, N = 351344.6951160.221. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Vector Mathba50K100K150K200K250KSE +/- 22.96, N = 3224460.71224417.231. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Matrix Mathba40K80K120K160K200KSE +/- 476.06, N = 3200423.60199178.681. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Hashab1.6M3.2M4.8M6.4M8MSE +/- 2470.62, N = 37627578.667624159.241. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Glibc Qsort Data Sortingba2004006008001000SE +/- 0.47, N = 3943.84942.221. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: CPU Stressba20K40K60K80K100KSE +/- 76.38, N = 382887.2482729.761. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: SENDFILEba110K220K330K440K550KSE +/- 656.59, N = 3528847.31515575.471. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.10Test: Fused Multiply-Addba7M14M21M28M35MSE +/- 7490.49, N = 333539318.9733507543.081. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 32 - Buffer Length: 256 - Filter Length: 32ba300M600M900M1200M1500MSE +/- 2211334.44, N = 3135000000013432000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 512ba4M8M12M16M20MSE +/- 21712.77, N = 320989000208516671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 57ba200M400M600M800M1000MSE +/- 377903.57, N = 37994900007951033331. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 16 - Buffer Length: 256 - Filter Length: 32ba150M300M450M600M750MSE +/- 1128996.60, N = 36921300006908000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 512ba2M4M6M8M10MSE +/- 21333.33, N = 310560000105376671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 32ab10M20M30M40M50MSE +/- 21825.06, N = 345075000450230001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 32ab20M40M60M80M100MSE +/- 85545.96, N = 389896333896860001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 57ba40M80M120M160M200MSE +/- 210502.05, N = 32061400002060866671. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 4 - Buffer Length: 256 - Filter Length: 32ba40M80M120M160M200MSE +/- 81853.53, N = 31791700001781000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 2 - Buffer Length: 256 - Filter Length: 57ab20M40M60M80M100MSE +/- 150591.43, N = 31038066671032600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 1.6Threads: 1 - Buffer Length: 256 - Filter Length: 57ab11M22M33M44M55MSE +/- 193694.20, N = 351993333518140001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.
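
The benchmark feeds Z3 a fixed SMT-LIB file (1.smt2); programmatically, posing a much smaller satisfiability query through the Z3 C++ API looks like this sketch, where the toy integer constraints are purely illustrative.

#include <iostream>
#include <z3++.h>

int main() {
    z3::context ctx;
    z3::expr x = ctx.int_const("x");
    z3::expr y = ctx.int_const("y");

    z3::solver solver(ctx);
    solver.add(x + y == 10);   // toy constraints standing in for the 1.smt2 workload
    solver.add(x > y);

    if (solver.check() == z3::sat)
        std::cout << solver.get_model() << "\n";   // e.g. x = 6, y = 4
    return 0;
}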

OpenBenchmarking.orgSeconds, Fewer Is BetterZ3 Theorem Prover 4.12.1SMT File: 1.smt2ba714212835SE +/- 0.01, N = 329.9029.931. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
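
At the API level, the path tracer binaries ultimately build a scene and trace rays along these lines; this is a loose single-triangle sketch assuming the Embree 4 C API (Embree 3 additionally required an intersection context), not the benchmark's own Crown or Asian Dragon scene setup.

#include <limits>
#include <embree4/rtcore.h>

int main() {
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene scene = rtcNewScene(device);

    // One triangle as a stand-in for the benchmark models
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* verts = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                                   RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                       RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    const float v[9] = {0,0,0, 1,0,0, 0,1,0};
    const unsigned t[3] = {0, 1, 2};
    for (int i = 0; i < 9; ++i) verts[i] = v[i];
    for (int i = 0; i < 3; ++i) idx[i] = t[i];
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);

    // Trace one primary ray toward the triangle
    RTCRayHit rayhit = {};
    rayhit.ray.org_x = 0.25f; rayhit.ray.org_y = 0.25f; rayhit.ray.org_z = -1.0f;
    rayhit.ray.dir_z = 1.0f;
    rayhit.ray.tfar = std::numeric_limits<float>::infinity();
    rayhit.ray.mask = 0xFFFFFFFFu;
    rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rtcIntersect1(scene, &rayhit);              // Embree 4 signature; arguments pointer is optional

    bool hit = rayhit.hit.geomID != RTC_INVALID_GEOMETRY_ID;
    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return hit ? 0 : 1;
}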

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragon Objba816243240SE +/- 0.06, N = 333.9633.83MIN: 33.75 / MAX: 34.43MIN: 33.53 / MAX: 34.4

QMCPACK

QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Execution Time - Seconds, Fewer Is BetterQMCPACK 3.16Input: simple-H2Oba612182430SE +/- 0.04, N = 327.4827.601. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragon Objba918273645SE +/- 0.05, N = 337.4837.39MIN: 37.25 / MAX: 38.16MIN: 37.08 / MAX: 37.99

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.23Benchmark: Random Fillab60120180240300SE +/- 0.90, N = 3262.98264.871. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

OpenBenchmarking.orgMB/s, More Is BetterLevelDB 1.23Benchmark: Random Fillab612182430SE +/- 0.09, N = 326.926.71. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.23Benchmark: Overwriteba60120180240300SE +/- 0.92, N = 3262.28262.351. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

OpenBenchmarking.orgMB/s, More Is BetterLevelDB 1.23Benchmark: Overwriteba612182430SE +/- 0.09, N = 327.027.01. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.8Video Input: Bosphorus 1080p - Video Preset: Fasterab612182430SE +/- 0.09, N = 324.8824.801. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.2.1Video Input: Chimera 1080p 10-bitab80160240320400SE +/- 0.32, N = 3374.79374.131. (CC) gcc options: -pthread -lm

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRemhos 1.0Test: Sample Remap Exampleab612182430SE +/- 0.04, N = 323.5423.651. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.2.1Video Input: Chimera 1080pab90180270360450SE +/- 0.11, N = 3398.39398.201. (CC) gcc options: -pthread -lm

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. More details on the CP2K benchmark test cases can be found at https://www.cp2k.org/performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCP2K Molecular Dynamics 2023.1Input: H20-64ba102030405042.1142.971. (F9X) gfortran options: -fopenmp -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kdbm -lcp2kgrid -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -lhdf5 -lhdf5_hl -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -lopenblas -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUab1.28972.57943.86915.15886.4485SE +/- 0.03300, N = 35.692065.73181MIN: 4.03MIN: 4.011. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUab0.32320.64640.96961.29281.616SE +/- 0.01766, N = 31.366301.43664MIN: 1.27MIN: 1.351. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Crownba816243240SE +/- 0.09, N = 334.5334.41MIN: 34.2 / MAX: 35.09MIN: 33.95 / MAX: 35.09

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Crownab918273645SE +/- 0.08, N = 338.4738.45MIN: 37.94 / MAX: 39.09MIN: 38.09 / MAX: 38.98

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Onlyba0.27450.5490.82351.0981.3725SE +/- 0.00, N = 31.221.22

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer ISPC - Model: Asian Dragonba918273645SE +/- 0.02, N = 339.4039.40MIN: 39.18 / MAX: 39.86MIN: 39.15 / MAX: 40.06

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 2.0Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Onlyba0.27680.55360.83041.10721.384SE +/- 0.00, N = 31.231.22

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.2.1Video Input: Summer Nature 4Kab50100150200250SE +/- 0.24, N = 3222.52222.241. (CC) gcc options: -pthread -lm

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 4 - Input: Bosphorus 1080pab3691215SE +/- 0.05, N = 310.8710.851. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.1Binary: Pathtracer - Model: Asian Dragonba1020304050SE +/- 0.05, N = 341.7841.59MIN: 41.55 / MAX: 42.41MIN: 41.23 / MAX: 42.19

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
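
A loose sketch of how a distributed transform is expressed with HeFFTe (FFTW backend, one rank owning the whole 256^3 box, matching the c2c 256 results below); the box decomposition and input data are illustrative assumptions, not the benchmark's own setup.

#include <vector>
#include <complex>
#include <mpi.h>
#include <heffte.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Single-rank decomposition of a 256^3 grid; real runs split the box across ranks
    heffte::box3d<> inbox({0, 0, 0}, {255, 255, 255});
    heffte::box3d<> outbox = inbox;

    heffte::fft3d<heffte::backend::fftw> fft(inbox, outbox, MPI_COMM_WORLD);

    std::vector<std::complex<double>> input(fft.size_inbox(), std::complex<double>(1.0, 0.0));
    std::vector<std::complex<double>> output(fft.size_outbox());
    fft.forward(input.data(), output.data());    // c2c transform, as in the c2c results

    MPI_Finalize();
    return 0;
}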

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 256ba48121620SE +/- 0.01, N = 313.7913.761. (CXX) g++ options: -O3

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 256ab48121620SE +/- 0.01, N = 313.8813.851. (CXX) g++ options: -O3

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

X Y Z: 144 144 144 - RT: 60

a: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

b: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2.02.73.3Input: Gas HII40ba3691215SE +/- 0.05, N = 312.6012.681. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O2 -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lz

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 8 - Input: Bosphorus 4Kba1224364860SE +/- 0.30, N = 354.5654.151. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.23Benchmark: Fill Syncab4K8K12K16K20KSE +/- 65.20, N = 310866.0116348.371. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

OpenBenchmarking.orgMB/s, More Is BetterLevelDB 1.23Benchmark: Fill Syncab0.1350.270.4050.540.675SE +/- 0.00, N = 30.60.41. (CXX) g++ options: -fno-exceptions -fno-rtti -O3 -lgmock -lgtest -lsnappy -ltcmalloc

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. More details on the CP2K benchmark test cases can be found at https://www.cp2k.org/performance. Learn more via the OpenBenchmarking.org test page.

Input: H2O-DFT-LS

a: The test quit with a non-zero exit status. E: mpirun noticed that process rank 13 with PID 0 on node phoronix-System-Product-Name exited on signal 9 (Killed).

b: The test quit with a non-zero exit status. E: mpirun noticed that process rank 23 with PID 0 on node phoronix-System-Product-Name exited on signal 9 (Killed).

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.

Grid Size: 1000

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUba0.95991.91982.87973.83964.7995SE +/- 0.01334, N = 34.216824.26624MIN: 4.1MIN: 4.121. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUab0.22160.44320.66480.88641.108SE +/- 0.010152, N = 30.9484500.985098MIN: 0.87MIN: 0.91. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 8 - Input: Bosphorus 1080pba20406080100SE +/- 0.40, N = 385.5085.311. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 256ab612182430SE +/- 0.06, N = 327.4327.271. (CXX) g++ options: -O3

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 12 - Input: Bosphorus 4Kba306090120150SE +/- 1.42, N = 4127.68126.421. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 256ba714212835SE +/- 0.11, N = 330.2930.141. (CXX) g++ options: -O3

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 13 - Input: Bosphorus 4Kba306090120150SE +/- 0.08, N = 3127.68127.141. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUba1.29772.59543.89315.19086.4885SE +/- 0.00320, N = 35.705745.76769MIN: 5.64MIN: 5.691. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUab1.0882.1763.2644.3525.44SE +/- 0.01212, N = 34.818934.83565MIN: 4.74MIN: 4.781. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 1.2.1Video Input: Summer Nature 1080pba130260390520650SE +/- 0.86, N = 3597.19597.021. (CC) gcc options: -pthread -lm

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 128ab1326395265SE +/- 0.45, N = 1556.4555.851. (CXX) g++ options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUab0.60721.21441.82162.42883.036SE +/- 0.01103, N = 32.685662.69872MIN: 2.61MIN: 2.641. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUba0.35490.70981.06471.41961.7745SE +/- 0.00305, N = 31.571901.57740MIN: 1.5MIN: 1.491. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 12 - Input: Bosphorus 1080pab70140210280350SE +/- 1.75, N = 3308.25305.731. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 13 - Input: Bosphorus 1080pba80160240320400SE +/- 1.35, N = 3364.28360.921. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Palabos

The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.

Grid Size: 4000

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 128ba612182430SE +/- 0.17, N = 326.8126.551. (CXX) g++ options: -O3

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 128ba714212835SE +/- 0.37, N = 330.8630.801. (CXX) g++ options: -O3

OpenBenchmarking.orgGFLOP/s, More Is BetterHeFFTe - Highly Efficient FFT for Exascale 2.3Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 128ab1224364860SE +/- 0.72, N = 351.8850.931. (CXX) g++ options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Test: x86_64 RdRand

a: The test run did not produce a result. E: stress-ng: error: [1222741] No stress workers invoked (one or more were unsupported)

b: The test run did not produce a result. E: stress-ng: error: [3041301] No stress workers invoked (one or more were unsupported)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Run: RTLightmap.hdr.4096x4096 - Device: Intel oneAPI SYCL

a: The test quit with a non-zero exit status. E: Error: unsupported device type: SYCL

b: The test quit with a non-zero exit status. E: Error: unsupported device type: SYCL

Run: RT.ldr_alb_nrm.3840x2160 - Device: Intel oneAPI SYCL

a: The test quit with a non-zero exit status. E: Error: unsupported device type: SYCL

b: The test quit with a non-zero exit status. E: Error: unsupported device type: SYCL

Run: RT.hdr_alb_nrm.3840x2160 - Device: Intel oneAPI SYCL

a: The test quit with a non-zero exit status. E: Error: unsupported device type: SYCL

b: The test quit with a non-zero exit status. E: Error: unsupported device type: SYCL

Run: RTLightmap.hdr.4096x4096 - Device: Radeon HIP

a: The test quit with a non-zero exit status. E: Error: unsupported device type: HIP

b: The test quit with a non-zero exit status. E: Error: unsupported device type: HIP

Run: RT.ldr_alb_nrm.3840x2160 - Device: Radeon HIP

a: The test quit with a non-zero exit status. E: Error: unsupported device type: HIP

b: The test quit with a non-zero exit status. E: Error: unsupported device type: HIP

Run: RT.hdr_alb_nrm.3840x2160 - Device: Radeon HIP

a: The test quit with a non-zero exit status. E: Error: unsupported device type: HIP

b: The test quit with a non-zero exit status. E: Error: unsupported device type: HIP

218 Results Shown

Whisper.cpp:
  ggml-medium.en - 2016 State of the Union
  ggml-small.en - 2016 State of the Union
SQLite:
  64
  32
libxsmm
SQLite:
  16
  4
oneDNN
PETSc
SQLite:
  8
  2
nekRS:
  Kershaw
  TurboPipe Periodic
High Performance Conjugate Gradient
QMCPACK
libxsmm
Monte Carlo Simulations of Ionised Nebulae
QMCPACK
Palabos
OSPRay
Whisper.cpp
OSPRay
Palabos:
  400
  500
QMCPACK
LevelDB:
  Seq Fill:
    Microseconds Per Op
    MB/s
Xonotic
LevelDB
HeFFTe - Highly Efficient FFT for Exascale:
  c2c - FFTW - double-long - 512
  c2c - Stock - double-long - 512
Stress-NG:
  Socket Activity
  Pipe
Laghos
GPAW
VVenC
SQLite
OSPRay
Xonotic:
  2560 x 1440 - Ultimate
  1920 x 1200 - Ultimate
  1920 x 1080 - Ultimate
  3840 x 2160 - Ultra
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
Xonotic
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Xonotic:
  1920 x 1080 - Ultra
  1920 x 1200 - Ultra
  2560 x 1440 - Ultra
Z3 Theorem Prover
OSPRay
Xonotic:
  1920 x 1080 - High
  2560 x 1440 - High
  1920 x 1200 - High
OSPRay
HeFFTe - Highly Efficient FFT for Exascale
LevelDB
OSPRay
HeFFTe - Highly Efficient FFT for Exascale
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Xonotic
CP2K Molecular Dynamics
Xonotic:
  1920 x 1080 - Low
  1920 x 1200 - Low
  2560 x 1440 - Low
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
oneDNN
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
VVenC
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Kripke
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Laghos
Neural Magic DeepSparse:
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-AV1
VVenC
LevelDB:
  Rand Read
  Hot Read
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
Opus Codec Encoding
oneDNN
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
eSpeak-NG Speech Engine
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Stress-NG
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
libxsmm:
  64
  32
Intel Open Image Denoise
Stress-NG:
  IO_uring
  MMAP
  Malloc
  Cloning
  MEMFD
  Atomic
  CPU Cache
Liquid-DSP:
  64 - 256 - 512
  8 - 256 - 512
Stress-NG
Liquid-DSP
Stress-NG
Liquid-DSP:
  8 - 256 - 32
  8 - 256 - 57
Stress-NG:
  Memory Copying
  NUMA
Liquid-DSP
Stress-NG:
  Matrix 3D Math
  Vector Shuffle
  Function Call
  Semaphores
  Wide Vector Math
  Vector Floating Point
  Glibc C String Functions
Liquid-DSP
Stress-NG:
  System V Message Passing
  Floating Point
Liquid-DSP
Stress-NG
Liquid-DSP
Stress-NG:
  Mutex
  AVL Tree
  Crypto
Liquid-DSP
Stress-NG:
  Context Switching
  Forking
  Vector Math
  Matrix Math
  Hash
  Glibc Qsort Data Sorting
  CPU Stress
  SENDFILE
  Fused Multiply-Add
Liquid-DSP:
  32 - 256 - 32
  2 - 256 - 512
  16 - 256 - 57
  16 - 256 - 32
  1 - 256 - 512
  1 - 256 - 32
  2 - 256 - 32
  4 - 256 - 57
  4 - 256 - 32
  2 - 256 - 57
  1 - 256 - 57
Z3 Theorem Prover
Embree
QMCPACK
Embree
LevelDB:
  Rand Fill:
    Microseconds Per Op
    MB/s
  Overwrite:
    Microseconds Per Op
    MB/s
VVenC
dav1d
Remhos
dav1d
CP2K Molecular Dynamics
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
Embree:
  Pathtracer ISPC - Crown
  Pathtracer - Crown
Intel Open Image Denoise
Embree
Intel Open Image Denoise
dav1d
SVT-AV1
Embree
HeFFTe - Highly Efficient FFT for Exascale:
  c2c - FFTW - double-long - 256
  c2c - Stock - double-long - 256
Monte Carlo Simulations of Ionised Nebulae
SVT-AV1
LevelDB:
  Fill Sync:
    Microseconds Per Op
    MB/s
oneDNN:
  IP Shapes 3D - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
SVT-AV1
HeFFTe - Highly Efficient FFT for Exascale
SVT-AV1
HeFFTe - Highly Efficient FFT for Exascale
SVT-AV1
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
dav1d
HeFFTe - Highly Efficient FFT for Exascale
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
HeFFTe - Highly Efficient FFT for Exascale:
  c2c - Stock - double-long - 128
  c2c - FFTW - double-long - 128
  r2c - Stock - double-long - 128