i7 8700K April

Intel Core i7-8700K testing with an ASUS TUF Z370-PLUS GAMING (2001 BIOS) motherboard and ASUS Intel UHD 630 CFL GT2 3GB graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204241-NE-I78700KAP20
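
For scripted or repeated comparisons, a minimal sketch of wrapping that same command from Python (assuming the Phoronix Test Suite is installed and on the PATH; the suite will still prompt interactively for test installation and result-saving options):

    import subprocess

    # Run the published comparison against this result file (2204241-NE-I78700KAP20).
    subprocess.run(
        ["phoronix-test-suite", "benchmark", "2204241-NE-I78700KAP20"],
        check=True,
    )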

Result Identifier    Date Run           Test Duration
A                    April 23 2022      1 Hour, 52 Minutes
B                    April 24 2022      1 Hour, 51 Minutes
C                    April 24 2022      1 Hour, 51 Minutes


i7 8700K April - System Details (runs A, B, C)

Processor: Intel Core i7-8700K @ 4.70GHz (6 Cores / 12 Threads)
Motherboard: ASUS TUF Z370-PLUS GAMING (2001 BIOS)
Chipset: Intel 8th Gen Core
Memory: 16GB
Disk: 128GB Toshiba THNSN5128GPU7
Graphics: ASUS Intel UHD 630 CFL GT2 3GB (1200MHz)
Audio: Realtek ALC887-VD
Monitor: DELL S2409W
Network: Intel I219-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200923-generic (x86_64) 20200922
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
OpenGL: 4.6 Mesa 20.0.8 (A, B); 4.6 Mesa 21.2.6 (C)
OpenCL: OpenCL 2.1
Compiler: GCC 9.3.0 (A, B); GCC 9.4.0 (C)
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details - Transparent Huge Pages: madvise

Compiler Details
- A, B: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- C: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xea - Thermald 1.9.1

Java Details
- A, B: OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.20.04)
- C: OpenJDK Runtime Environment (build 11.0.14.1+1-Ubuntu-0ubuntu1.20.04)

Python Details - Python 2.7.18 + Python 3.8.10

Security Details - itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (relative performance of A/B/C, Phoronix Test Suite): OSPray, SVT-AV1, Memtier_benchmark, Parallel BZIP2 Compression, libavif avifenc, InfluxDB, Facebook RocksDB, Timed Gem5 Compilation, Timed MPlayer Compilation, libgav1, AOM AV1, dav1d, oneDNN, Java JMH, OSPray Studio.

Detailed results table (i7 8700K April): 82 tests across runs A, B, and C; the per-test results are listed individually below.

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time - Items Per Second, More Is Better - A: 1.69441 / B: 1.51937 / C: 1.69568

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time - Items Per Second, More Is Better - A: 1.66967 / B: 1.49866 / C: 1.65816

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 0.10 / B: 0.09 / C: 0.10

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time - Items Per Second, More Is Better - A: 2.46500 / B: 2.21914 / C: 2.44409

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 4.383 / B: 4.008 / C: 4.341
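
As a rough illustration of what the "Preset 4 - Bosphorus 1080p" configuration corresponds to outside the test harness, here is a sketch of a standalone SvtAv1EncApp invocation driven from Python; the input file name is a placeholder for the raw 1920x1080 YUV source the test profile downloads, and the flag spellings follow the SvtAv1EncApp help output, so treat them as assumptions that may vary between SVT-AV1 releases:

    import subprocess

    # Encode a raw 1080p YUV clip with SVT-AV1 at preset 4, writing an IVF bitstream.
    # "bosphorus_1080p.yuv" is a placeholder input, not a file shipped with this report.
    subprocess.run(
        [
            "SvtAv1EncApp",
            "-i", "bosphorus_1080p.yuv",
            "-w", "1920",
            "-h", "1080",
            "--preset", "4",
            "-b", "output.ivf",
        ],
        check=True,
    )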

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 43.84 / B: 43.84 / C: 47.93

SVT-AV1 1.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 1.258 / B: 1.371 / C: 1.368

OSPray

OSPray 2.9 - Benchmark: particle_volume/pathtracer/real_time - Items Per Second, More Is Better - A: 145.88 / B: 135.58 / C: 147.66

OSPray 2.9 - Benchmark: particle_volume/ao/real_time - Items Per Second, More Is Better - A: 12.12 / B: 11.24 / C: 12.22

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 65.33 / B: 60.81 / C: 65.97

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Sequential Fill - Op/s, More Is Better - A: 780883 / B: 790120 / C: 846118

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 17.12 / B: 18.47 / C: 18.51

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 - Seconds, Fewer Is Better - A: 236.14 / B: 253.44 / C: 234.47
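
For context on what these encoder-speed numbers correspond to, a sketch of comparable standalone avifenc invocations from Python; "input.jpg" is a placeholder, and the -s/--lossless flag names are taken from avifenc's usage text, so treat them as assumptions for this exact 0.10 release:

    import subprocess

    # Slowest, most thorough encode, comparable to the "Encoder Speed: 0" result above.
    subprocess.run(["avifenc", "-s", "0", "input.jpg", "output_s0.avif"], check=True)

    # Fastest speed with lossless mode, comparable to the "10, Lossless" result further down.
    subprocess.run(
        ["avifenc", "-s", "10", "--lossless", "input.jpg", "output_s10_lossless.avif"],
        check=True,
    )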

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 285.42 / B: 265.89 / C: 284.43

SVT-AV1 1.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 73.46 / B: 68.71 / C: 73.34

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 12.01 / B: 12.21 / C: 11.43

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.0 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 135.32 / B: 127.14 / C: 134.87

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read While Writing - Op/s, More Is Better - A: 1358148 / B: 1284717 / C: 1295911

OSPray

OSPray 2.9 - Benchmark: particle_volume/scivis/real_time - Items Per Second, More Is Better - A: 11.87 / B: 11.26 / C: 11.86

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless - Seconds, Fewer Is Better - A: 8.489 / B: 8.562 / C: 8.909

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Random Read - Op/s, More Is Better - A: 31078510 / B: 32569873 / C: 32534735

Memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool. This test profile currently stresses just the Redis protocol and basic options with a 1:1 Set/Get ratio, a pipeline of 30, 100 clients per thread, and a thread count equal to the number of CPU cores/threads present. Patches to extend the test are welcome as always. Currently this test profile uses Memtier_benchmark 1.3 and Redis 6. Learn more via the OpenBenchmarking.org test page.

Memtier_benchmark 1.3 - Protocol: Redis - Ops/sec, More Is Better - A: 1468038.51 / B: 1407387.65 / C: 1406279.36
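
A minimal sketch of an equivalent manual run, assuming a Redis server on localhost:6379 and that memtier_benchmark is installed; the option spellings mirror memtier_benchmark's documented flags, and the thread count is derived from the CPU count as described above:

    import os
    import subprocess

    # 1:1 Set/Get ratio, pipeline of 30, 100 clients per thread,
    # threads equal to the number of CPU threads on the host.
    threads = os.cpu_count() or 1
    subprocess.run(
        [
            "memtier_benchmark",
            "--server=127.0.0.1", "--port=6379",
            "--protocol=redis",
            "--ratio=1:1",
            "--pipeline=30",
            "--clients=100",
            f"--threads={threads}",
        ],
        check=True,
    )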

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 0.25 / B: 0.26 / C: 0.26

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression - Seconds, Fewer Is Better - A: 12.09 / B: 12.51 / C: 12.52
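
A minimal sketch of reproducing this measurement by hand, assuming pbzip2 is installed and the FreeBSD memstick image has already been downloaded to the working directory:

    import os
    import subprocess
    import time

    # Compress the image with one worker per CPU thread, keeping the original file.
    src = "FreeBSD-13.0-RELEASE-amd64-memstick.img"
    workers = os.cpu_count() or 1
    start = time.time()
    subprocess.run(["pbzip2", f"-p{workers}", "-k", "-f", src], check=True)
    print(f"Compressed in {time.time() - start:.2f} seconds")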

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 107.31 / B: 106.62 / C: 103.96

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read Random Write Random - Op/s, More Is Better - A: 1167351 / B: 1137811 / C: 1167813

Facebook RocksDB 7.0.1 - Test: Random Fill - Op/s, More Is Better - A: 572228 / B: 565765 / C: 558532

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 2.30231 / B: 2.26695 / C: 2.31948 (MIN: 2.17 / 2.17 / 2.18)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better - A: 963386.2 / B: 944999.5 / C: 949457.3

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Update Random - Op/s, More Is Better - A: 334854 / B: 338511 / C: 332057

Facebook RocksDB 7.0.1 - Test: Random Fill Sync - Op/s, More Is Better - A: 1064 / B: 1052 / C: 1047

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 32.86 / B: 33.35 / C: 33.05

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 52.27 / B: 52.86 / C: 52.23

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 49.21 / B: 49.48 / C: 48.98

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 1080p - FPS, More Is Better - A: 219.22 / B: 221.31 / C: 220.35

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 11.66 / B: 11.55 / C: 11.57 (MIN: 7.83 / 7.84 / 7.84)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 81.64 / B: 80.90 / C: 81.50

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better - A: 1116777.5 / B: 1111601.8 / C: 1121610.0

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 - Seconds, Fewer Is Better - A: 15.62 / B: 15.75 / C: 15.66

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 3.93704 / B: 3.96264 / C: 3.93005 (MIN: 3.88 / 3.91 / 3.88)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless - Seconds, Fewer Is Better - A: 19.72 / B: 19.86 / C: 19.88

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - ms, Fewer Is Better - A: 132405 / B: 131598 / C: 131938

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile - Seconds, Fewer Is Better - A: 53.82 / B: 54.08 / C: 54.13

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - ms, Fewer Is Better - A: 74447 / B: 74852 / C: 74420

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 3.51 / B: 3.52 / C: 3.50

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 4K - FPS, More Is Better - A: 57.73 / B: 58.05 / C: 58.03

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile - Seconds, Fewer Is Better - A: 644.20 / B: 647.75 / C: 647.75

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - ms, Fewer Is Better - A: 72250 / B: 72492 / C: 72099

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - ms, Fewer Is Better - A: 3831 / B: 3833 / C: 3851

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 7.77 / B: 7.77 / C: 7.81

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Summer Nature 4K - FPS, More Is Better - A: 150.18 / B: 150.92 / C: 150.74

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 9.50849 / B: 9.48852 / C: 9.46312 (MIN: 9.38 / 9.38 / 9.38)

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - ms, Fewer Is Better - A: 158670 / B: 159417 / C: 158946

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K - Frames Per Second, More Is Better - A: 6.54 / B: 6.57 / C: 6.55

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Summer Nature 1080p - FPS, More Is Better - A: 621.12 / B: 622.15 / C: 623.93

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 1.87948 / B: 1.87125 / C: 1.87148 (MIN: 1.83 / 1.83 / 1.83)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 110.34 / B: 109.87 / C: 110.35

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 9.44 / B: 9.44 / C: 9.48

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - ms, Fewer Is Better - A: 135863 / B: 136405 / C: 136005

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 4.39167 / B: 4.37712 / C: 4.38424 (MIN: 4.32 / 4.31 / 4.31)

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 9.13902 / B: 9.11072 / C: 9.12701 (MIN: 9.11 / 9.07 / 9.09)

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p 10-bit - FPS, More Is Better - A: 55.51 / B: 55.56 / C: 55.68

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Chimera 1080p - FPS, More Is Better - A: 489.89 / B: 490.20 / C: 488.94

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 2.76666 / B: 2.77261 / C: 2.77083 (MIN: 2.74 / 2.74 / 2.75)

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 5.65047 / B: 5.64799 / C: 5.63864 (MIN: 5.63 / 5.63 / 5.61)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 2271.65 / B: 2276.31 / C: 2272.16 (MIN: 2270.17 / 2272.74 / 2270.16)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2 - Seconds, Fewer Is Better - A: 107.78 / B: 107.81 / C: 107.60

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0 - Video Input: Chimera 1080p 10-bit - FPS, More Is Better - A: 407.04 / B: 407.72 / C: 407.49

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - ms, Fewer Is Better - A: 3706 / B: 3709 / C: 3712

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 16.02 / B: 16.01 / C: 16.00 (MIN: 15.94 / 15.93 / 15.92)

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p - Frames Per Second, More Is Better - A: 19.61 / B: 19.59 / C: 19.58

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - ms, Fewer Is Better - A: 4544 / B: 4547 / C: 4550

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 2272.15 / B: 2274.53 / C: 2275.15 (MIN: 2270.79 / 2272.95 / 2272.94)

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 1.87822 / B: 1.87678 / C: 1.87884 (MIN: 1.86 / 1.86 / 1.86)

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better - A: 4029.52 / B: 4033.32 / C: 4028.96 (MIN: 4025.41 / 4029.19 / 4026.57)

Java JMH

This very basic test profile runs the stock sample benchmark of Java JMH (the Java Microbenchmark Harness) via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH - Throughput - Ops/s, More Is Better - A: 12879627174.13 / B: 12892791889.31 / C: 12885502940.31

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 17.14 / B: 17.13 / C: 17.13 (MIN: 17.01 / 17 / 17)

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better - A: 2276.52 / B: 2274.32 / C: 2274.39 (MIN: 2274.85 / 2272.45 / 2272.42)

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - ms, Fewer Is Better - A: 85875 / B: 85859 / C: 85808

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p - FPS, More Is Better - A: 179.91 / B: 179.89 / C: 180.03

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - A: 4030.58 / B: 4029.67 / C: 4028.86 (MIN: 4026.56 / 4027.71 / 4026.69)

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - A: 4031.35 / B: 4030.62 / C: 4029.68 (MIN: 4027.48 / 4026.66 / 4026.97)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Concurrent Requests: 500

A: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

B: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

C: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

Concurrent Requests: 200

A: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

B: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

C: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

Concurrent Requests: 100

A: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

B: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

C: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

Concurrent Requests: 20

A: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

B: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

C: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

Concurrent Requests: 1

A: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

B: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found

C: The test quit with a non-zero exit status. E: ./nginx: 2: /go/bin/bombardier: not found
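
All three runs failed because the Go-based Bombardier load generator was missing from /go/bin. One possible manual remedy before re-running the nginx profile, sketched from Python and assuming a working Go toolchain (the module path refers to the upstream codesenberg/bombardier project; normally the test profile handles this dependency itself):

    import os
    import subprocess

    # Install bombardier so it lands in /go/bin, where the failing test expected it
    # (go install places binaries in $GOPATH/bin when GOBIN is unset).
    env = dict(os.environ, GOPATH="/go")
    subprocess.run(
        ["go", "install", "github.com/codesenberg/bombardier@latest"],
        env=env,
        check=True,
    )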

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

82 Results Shown

OSPray:
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
AOM AV1
OSPray
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 10 - Bosphorus 4K
  Preset 4 - Bosphorus 4K
OSPray:
  particle_volume/pathtracer/real_time
  particle_volume/ao/real_time
SVT-AV1
Facebook RocksDB
SVT-AV1
libavif avifenc
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 12 - Bosphorus 4K
AOM AV1
SVT-AV1
Facebook RocksDB
OSPray
libavif avifenc
Facebook RocksDB
Memtier_benchmark
AOM AV1
Parallel BZIP2 Compression
AOM AV1
Facebook RocksDB:
  Read Rand Write Rand
  Rand Fill
oneDNN
InfluxDB
Facebook RocksDB:
  Update Rand
  Rand Fill Sync
AOM AV1:
  Speed 8 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
libgav1
oneDNN
AOM AV1
InfluxDB
libavif avifenc
oneDNN
libavif avifenc
OSPray Studio
Timed MPlayer Compilation
OSPray Studio
AOM AV1
libgav1
Timed Gem5 Compilation
OSPray Studio:
  1 - 1080p - 16 - Path Tracer
  2 - 1080p - 1 - Path Tracer
AOM AV1
dav1d
oneDNN
OSPray Studio
AOM AV1
dav1d
oneDNN
AOM AV1:
  Speed 10 Realtime - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
OSPray Studio
oneDNN:
  IP Shapes 1D - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
libgav1
dav1d
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
libavif avifenc
dav1d
OSPray Studio
oneDNN
AOM AV1
OSPray Studio
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Java JMH
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
OSPray Studio
libgav1
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU