test

lxc testing on Debian GNU/Linux 12 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2411103-NE-TEST5907621


Test Runs

Result Identifier                                  Date Run       Test Duration
r1                                                 November 04    23 Minutes
2 x Intel Xeon E5-2680 v4                          November 05    3 Days, 20 Hours, 49 Minutes
2 x Intel Xeon E5-2680 v4 - mgag200drmfb - Dell    November 06    1 Minute



System Details

Processors: 2 x Intel Xeon E5-2680 v4 @ 3.30GHz (28 Cores / 52 Threads on one run, 28 Cores / 56 Threads on the other)
Motherboard: Dell PowerEdge R630 02C2CP (2.19.0 BIOS)
Memory: 98GB
Disk: 1000GB TOSHIBA MQ01ABD1
Graphics: mgag200drmfb
OS: Debian GNU/Linux 12
Kernel: 6.8.12-3-pve (x86_64)
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1600x1200
System Layer: lxc

System Logs / Notes:
- Transparent Huge Pages: madvise
- r1, 2 x Intel Xeon E5-2680 v4 (GCC configure flags): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_cpufreq performance
- CPU Microcode: 0xb000040
- Security mitigations:
  gather_data_sampling: Not affected
  itlb_multihit: KVM: Mitigation of VMX disabled
  l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
  mds: Mitigation of Clear buffers; SMT vulnerable
  meltdown: Mitigation of PTI
  mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable
  reg_file_data_sampling: Not affected
  retbleed: Not affected
  spec_rstack_overflow: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Retpolines; IBPB: conditional; IBRS_FW; STIBP: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected
  srbds: Not affected
  tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
- 2 x Intel Xeon E5-2680 v4: OpenJDK Runtime Environment (build 17.0.13+11-Debian-2deb12u1)
- 2 x Intel Xeon E5-2680 v4: Python 3.11.2

[Flattened overview table of raw results omitted. Benchmarks covered include Whisper.cpp, Numenta Anomaly Benchmark, OpenVINO, XNNPACK, TNN, NCNN, MNN, DeepSparse, TensorFlow, TensorFlow Lite, PyTorch, oneDNN, cpuminer-opt, XMRig, LLVM/GCC/PHP/Node.js build times, InfluxDB, Apache Siege, Memtier, Redis, Memcached, avifenc, Zstd compression, Blender, AOM AV1, Aircrack-ng, OpenRadioss, Incompact3D, PENNANT, AMG, QuantLib, NAMD, Rodinia, miniBUDE, LCZero, Stockfish, and CP2K, among others. Individual results are presented below.]

Whisper.cpp

Whisper.cpp 1.6.2 - Input: 2016 State of the Union - 2 x Intel Xeon E5-2680 v4
Seconds, fewer is better.

Model: ggml-medium.en: 1539.31 (SE +/- 3.76, N = 3)
Model: ggml-small.en: 543.89 (SE +/- 3.14, N = 3)
Model: ggml-base.en: 202.23 (SE +/- 1.87, N = 3)

1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating anomaly-detection algorithms in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
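To make concrete what one of the timed detectors does, here is a minimal sketch of the idea behind a windowed-Gaussian anomaly score: each point is scored by how many standard deviations it sits from the mean of a trailing window. This is an illustration only, not NAB's actual implementation; the function name, window size, and epsilon guard are our own choices.

```python
from statistics import mean, stdev

def windowed_gaussian_scores(series, window=64, eps=1e-9):
    """Score each point by its distance from the trailing-window mean,
    measured in standard deviations (illustrative sketch, not NAB's code)."""
    scores = []
    for i, x in enumerate(series):
        past = series[max(0, i - window):i]
        if len(past) < 2:
            # Not enough history to estimate a distribution yet.
            scores.append(0.0)
            continue
        mu, sigma = mean(past), stdev(past)
        scores.append(abs(x - mu) / (sigma + eps))
    return scores

# A flat series with one spike: the spike should receive the highest score.
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0, 1.0]
scores = windowed_gaussian_scores(data, window=8)
```

Because the detector is a single pass with O(window) work per point, its benchmark time scales roughly linearly with the input length, which is why it is among the fastest NAB detectors measured here.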

Numenta Anomaly Benchmark 1.1 - 2 x Intel Xeon E5-2680 v4
Seconds, fewer is better.

Detector: Contextual Anomaly Detector OSE: 43.62 (SE +/- 0.43, N = 3)
Detector: Bayesian Changepoint: 45.70 (SE +/- 0.55, N = 3)
Detector: Earthgecko Skyline: 110.01 (SE +/- 0.33, N = 3)
Detector: Windowed Gaussian: 8.543 (SE +/- 0.070, N = 15)
Detector: Relative Entropy: 21.62 (SE +/- 0.24, N = 4)
Detector: KNN CAD: 169.28 (SE +/- 1.37, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
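Each result below is reported as a mean with an "SE +/- x, N = y" annotation. A minimal sketch of how such a summary can be computed from repeated runs, assuming the standard-error formula SE = s / sqrt(N) (the source does not state the exact formula the test suite uses), with hypothetical sample values:

```python
from statistics import mean, stdev
from math import sqrt

def summarize(samples):
    """Return (mean, standard error, N), rounded to two decimals,
    in the style of the 'SE +/- x, N = y' graph annotations."""
    n = len(samples)
    se = stdev(samples) / sqrt(n)  # assumed formula: sample stdev / sqrt(N)
    return round(mean(samples), 2), round(se, 2), n

# Three hypothetical FPS measurements for one model:
avg, se, n = summarize([50.0, 49.5, 50.5])
print(avg, se, n)  # 50.0 0.29 3
```

A small SE relative to the mean (as in most results below) indicates the three runs were tightly clustered, so the reported averages are stable.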

OpenVINO 2024.0 - Device: CPU - 2 x Intel Xeon E5-2680 v4
Latency in ms (fewer is better), throughput in FPS (more is better); each value is the mean of N = 3 runs with its standard error.

Model: Age Gender Recognition Retail 0013 FP16-INT8: 1.30 ms (SE +/- 0.00, MIN 1.27 / MAX 10.77); 21265.92 FPS (SE +/- 19.67)
Model: Handwritten English Recognition FP16-INT8: 130.16 ms (SE +/- 0.36, MIN 116.02 / MAX 172.78); 214.88 FPS (SE +/- 0.61)
Model: Age Gender Recognition Retail 0013 FP16: 1.76 ms (SE +/- 0.01, MIN 1.68 / MAX 9.16); 15819.82 FPS (SE +/- 56.67)
Model: Person Re-Identification Retail FP16: 12.84 ms (SE +/- 0.01, MIN 12.15 / MAX 23.49); 699.99 FPS (SE +/- 0.71)
Model: Handwritten English Recognition FP16: 53.49 ms (SE +/- 0.04, MIN 49.19 / MAX 73.08); 168.09 FPS (SE +/- 0.13)
Model: Noise Suppression Poconet-Like FP16: 15.74 ms (SE +/- 0.01, MIN 13.82 / MAX 29.98); 567.32 FPS (SE +/- 0.35)
Model: Person Vehicle Bike Detection FP16: 21.23 ms (SE +/- 0.00, MIN 17.51 / MAX 33.53); 422.96 FPS (SE +/- 0.06)
Model: Weld Porosity Detection FP16-INT8: 40.46 ms (SE +/- 0.03, MIN 39.84 / MAX 47.66); 691.42 FPS (SE +/- 0.53)
Model: Machine Translation EN To DE FP16: 162.59 ms (SE +/- 0.63, MIN 131.74 / MAX 222.03); 55.26 FPS (SE +/- 0.22)
Model: Road Segmentation ADAS FP16-INT8: 44.46 ms (SE +/- 0.06, MIN 42.03 / MAX 58.71); 202.26 FPS (SE +/- 0.29)
Model: Face Detection Retail FP16-INT8: 6.91 ms (SE +/- 0.00, MIN 6.78 / MAX 13.02); 1299.65 FPS (SE +/- 0.44)
Model: Weld Porosity Detection FP16: 18.55 ms (SE +/- 0.00, MIN 17.41 / MAX 36.64); 484.39 FPS (SE +/- 0.11)
Model: Vehicle Detection FP16-INT8: 20.66 ms (SE +/- 0.01, MIN 20.17 / MAX 30.51); 435.20 FPS (SE +/- 0.12)
Model: Road Segmentation ADAS FP16: 68.76 ms (SE +/- 0.38, MIN 43.42 / MAX 111.07); 130.77 FPS (SE +/- 0.72)
Model: Face Detection Retail FP16: 6.50 ms (SE +/- 0.02, MIN 5.8 / MAX 20.01); 1379.83 FPS (SE +/- 4.45)
Model: Face Detection FP16-INT8: 1352.53 ms (SE +/- 0.25, MIN 1293.5 / MAX 1386.35); 6.58 FPS (SE +/- 0.01)
Model: Vehicle Detection FP16: 26.15 ms (SE +/- 0.32, MIN 19.52 / MAX 45.16); 343.83 FPS (SE +/- 4.27)
Model: Person Detection FP32: 179.42 ms (SE +/- 1.19, MIN 153.31 / MAX 227.77); 50.09 FPS (SE +/- 0.33)
Model: Person Detection FP16: 182.19 ms (SE +/- 0.99, MIN 143.44 / MAX 226.91); 49.33 FPS (SE +/- 0.27)

1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2024.0Model: Face Detection FP16 - Device: CPU2 x Intel Xeon E5-2680 v4400800120016002000SE +/- 0.53, N = 31675.23MIN: 1524.28 / MAX: 1770.41. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Face Detection FP16 - Device: CPU2 x Intel Xeon E5-2680 v41.19032.38063.57094.76125.9515SE +/- 0.01, N = 35.291. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

XNNPACK

XNNPACK b7b048 - 2 x Intel Xeon E5-2680 v4
(us: fewer is better. Compiled with: (CXX) g++ options: -O3 -lrt -lm)

Model: QS8MobileNetV2: 3293 us (SE +/- 33.39, N = 3)
Model: FP16MobileNetV3Small: 3477 us (SE +/- 44.00, N = 3)
Model: FP16MobileNetV3Large: 5119 us (SE +/- 3.84, N = 3)
Model: FP16MobileNetV1: 2663 us (SE +/- 54.55, N = 3)
Model: FP32MobileNetV3Small: 3674 us (SE +/- 120.36, N = 3)
Model: FP32MobileNetV3Large: 4933 us (SE +/- 109.39, N = 3)
Model: FP32MobileNetV1: 2299 us (SE +/- 19.22, N = 3)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - 2 x Intel Xeon E5-2680 v4
(ms: fewer is better. Compiled with: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl)

Target: CPU - Model: SqueezeNet v1.1: 351.57 ms (SE +/- 0.19, N = 3, MIN: 348.06 / MAX: 361.44)
Target: CPU - Model: SqueezeNet v2: 93.73 ms (SE +/- 0.05, N = 3, MIN: 92.89 / MAX: 102.53)
Target: CPU - Model: MobileNet v2: 386.78 ms (SE +/- 1.50, N = 3, MIN: 377.08 / MAX: 410.1)
Target: CPU - Model: DenseNet: 3758.32 ms (SE +/- 13.71, N = 3, MIN: 3547 / MAX: 3972.77)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - 2 x Intel Xeon E5-2680 v4
(ms: fewer is better. Compiled with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread)

Target: Vulkan GPU - Model: FastestDet: 9.56 ms (SE +/- 0.10, N = 3, MIN: 8.52 / MAX: 49.42)
Target: Vulkan GPU - Model: vision_transformer: 82.07 ms (SE +/- 0.29, N = 3, MIN: 77.72 / MAX: 211.05)
Target: Vulkan GPU - Model: regnety_400m: 33.70 ms (SE +/- 0.16, N = 3, MIN: 31.52 / MAX: 136.78)
Target: Vulkan GPU - Model: squeezenet_ssd: 20.46 ms (SE +/- 0.44, N = 3, MIN: 18.81 / MAX: 75.68)
Target: Vulkan GPU - Model: yolov4-tiny: 32.07 ms (SE +/- 0.18, N = 3, MIN: 29.66 / MAX: 113.51)
Target: Vulkan GPU - Model: mobilenetv2-yolov3: 19.60 ms (SE +/- 0.27, N = 3, MIN: 18.34 / MAX: 116.67)
Target: Vulkan GPU - Model: resnet50: 24.06 ms (SE +/- 0.32, N = 3, MIN: 22.21 / MAX: 124.5)
Target: Vulkan GPU - Model: alexnet: 9.81 ms (SE +/- 0.26, N = 3, MIN: 9.08 / MAX: 107.57)
Target: Vulkan GPU - Model: resnet18: 11.47 ms (SE +/- 0.21, N = 3, MIN: 10.74 / MAX: 65.89)
Target: Vulkan GPU - Model: vgg16: 42.45 ms (SE +/- 0.15, N = 3, MIN: 38.84 / MAX: 131.47)
Target: Vulkan GPU - Model: googlenet: 18.75 ms (SE +/- 0.09, N = 3, MIN: 17.58 / MAX: 61.73)
Target: Vulkan GPU - Model: efficientnet-b0: 11.31 ms (SE +/- 0.13, N = 3, MIN: 10.54 / MAX: 129.33)
Target: Vulkan GPU - Model: mnasnet: 7.07 ms (SE +/- 0.19, N = 3, MIN: 6.55 / MAX: 55.91)
Target: Vulkan GPU - Model: shufflenet-v2: 8.66 ms (SE +/- 0.12, N = 3, MIN: 7.95 / MAX: 54.97)
Target: Vulkan GPU - Model: mobilenet-v2: 7.67 ms (SE +/- 0.20, N = 3, MIN: 7.11 / MAX: 73.78)
Target: Vulkan GPU - Model: mobilenet: 19.60 ms (SE +/- 0.27, N = 3, MIN: 18.34 / MAX: 116.67)
Target: CPU - Model: FastestDet: 10.09 ms (SE +/- 0.12, N = 12, MIN: 8.51 / MAX: 179.65)
Target: CPU - Model: vision_transformer: 82.40 ms (SE +/- 0.25, N = 12, MIN: 76.17 / MAX: 439.37)
Target: CPU - Model: regnety_400m: 33.52 ms (SE +/- 0.27, N = 12, MIN: 30.18 / MAX: 387.55)
Target: CPU - Model: squeezenet_ssd: 19.85 ms (SE +/- 0.19, N = 12, MIN: 16.83 / MAX: 168.17)
Target: CPU - Model: yolov4-tiny: 32.22 ms (SE +/- 0.25, N = 12, MIN: 29.25 / MAX: 192.05)
Target: CPU - Model: mobilenetv2-yolov3: 19.56 ms (SE +/- 0.27, N = 12, MIN: 17.72 / MAX: 164.14)
Target: CPU - Model: resnet50: 23.76 ms (SE +/- 0.35, N = 12, MIN: 20.94 / MAX: 184.92)
Target: CPU - Model: alexnet: 9.48 ms (SE +/- 0.10, N = 12, MIN: 8.34 / MAX: 111.39)
Target: CPU - Model: resnet18: 11.58 ms (SE +/- 0.08, N = 12, MIN: 10.7 / MAX: 110.31)
Target: CPU - Model: vgg16: 41.46 ms (SE +/- 0.41, N = 12, MIN: 36.72 / MAX: 331.43)
Target: CPU - Model: googlenet: 18.36 ms (SE +/- 0.14, N = 12, MIN: 16.63 / MAX: 156.48)
Target: CPU - Model: blazeface: 3.76 ms (SE +/- 0.06, N = 12, MIN: 3.22 / MAX: 97.74)
Target: CPU - Model: efficientnet-b0: 11.22 ms (SE +/- 0.11, N = 12, MIN: 10.09 / MAX: 196.48)
Target: CPU - Model: mnasnet: 7.06 ms (SE +/- 0.07, N = 12, MIN: 5.98 / MAX: 52.18)
Target: CPU - Model: shufflenet-v2: 8.79 ms (SE +/- 0.11, N = 12, MIN: 7.42 / MAX: 103.67)
Target: CPU - Model: mobilenet-v3: 7.49 ms (SE +/- 0.08, N = 12, MIN: 6.65 / MAX: 69.83)
Target: CPU - Model: mobilenet: 19.56 ms (SE +/- 0.27, N = 12, MIN: 17.72 / MAX: 164.14)
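The result viewer's "Show Overall Geometric Mean" option condenses many per-model timings like the NCNN results above into one figure. A minimal sketch of that aggregation, assuming a plain unweighted geometric mean (the exact weighting OpenBenchmarking.org applies is not stated in this file):

```python
import math

def geometric_mean(values):
    """Geometric mean of positive values, computed via logs for numerical stability."""
    assert all(v > 0 for v in values), "geometric mean requires positive values"
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Two NCNN CPU latencies from the table above (ms); illustrative only
print(geometric_mean([10.09, 82.40]))
```

The geometric mean is preferred over the arithmetic mean here because it is insensitive to the scale of each individual test.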

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d - 2 x Intel Xeon E5-2680 v4
(ms: fewer is better. Compiled with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl)

Model: inception-v3: 38.38 ms (SE +/- 0.24, N = 9, MIN: 29.94 / MAX: 109.16)
Model: mobilenet-v1-1.0: 3.772 ms (SE +/- 0.024, N = 9, MIN: 3.26 / MAX: 10.66)
Model: MobileNetV2_224: 4.947 ms (SE +/- 0.071, N = 9, MIN: 3.92 / MAX: 8.48)
Model: SqueezeNetV1.0: 7.976 ms (SE +/- 0.154, N = 9, MIN: 6.03 / MAX: 23.01)
Model: resnet-v2-50: 26.96 ms (SE +/- 0.18, N = 9, MIN: 24.27 / MAX: 169.42)
Model: mobilenetV3: 3.504 ms (SE +/- 0.036, N = 9, MIN: 2.93 / MAX: 5.55)
Model: nasnet: 22.03 ms (SE +/- 0.35, N = 9, MIN: 16.48 / MAX: 47.88)

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.7 - 2 x Intel Xeon E5-2680 v4
(ms/batch: fewer is better; items/sec: more is better)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 122.21 ms/batch (SE +/- 0.27, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: 8.1818 items/sec (SE +/- 0.0178, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 805.37 ms/batch (SE +/- 1.38, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: 17.29 items/sec (SE +/- 0.04, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream: 22.25 ms/batch (SE +/- 0.07, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream: 44.91 items/sec (SE +/- 0.13, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 81.97 ms/batch (SE +/- 0.07, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 170.65 items/sec (SE +/- 0.13, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: 126.92 ms/batch (SE +/- 0.15, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: 7.8773 items/sec (SE +/- 0.0094, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: 880.92 ms/batch (SE +/- 0.49, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: 15.85 items/sec (SE +/- 0.02, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 17.25 ms/batch (SE +/- 0.07, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: 57.94 items/sec (SE +/- 0.25, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 93.16 ms/batch (SE +/- 0.23, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: 150.08 items/sec (SE +/- 0.36, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream: 24.04 ms/batch (SE +/- 0.06, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream: 41.56 items/sec (SE +/- 0.11, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 146.34 ms/batch (SE +/- 0.32, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 95.59 items/sec (SE +/- 0.23, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 12.78 ms/batch (SE +/- 0.02, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: 78.16 items/sec (SE +/- 0.13, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 71.71 ms/batch (SE +/- 0.33, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: 195.06 items/sec (SE +/- 0.90, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream: 314.78 ms/batch (SE +/- 0.72, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream: 3.1765 items/sec (SE +/- 0.0072, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream: 9614.47 ms/batch (SE +/- 37.55, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream: 1.2827 items/sec (SE +/- 0.0121, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream: 2.3685 ms/batch (SE +/- 0.0060, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream: 420.97 items/sec (SE +/- 1.03, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 13.68 ms/batch (SE +/- 0.03, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 1022.06 items/sec (SE +/- 2.18, N = 3)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream: 12.73 ms/batch (SE +/- 0.01, N = 3)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream: 78.43 items/sec (SE +/- 0.05, N = 3)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: 71.85 ms/batch (SE +/- 0.27, N = 3)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: 194.67 items/sec (SE +/- 0.67, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: 9.9602 ms/batch (SE +/- 0.0282, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: 100.25 items/sec (SE +/- 0.28, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 40.48 ms/batch (SE +/- 0.07, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: 345.53 items/sec (SE +/- 0.63, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 121.77 ms/batch (SE +/- 0.16, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 8.2109 items/sec (SE +/- 0.0110, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 806.62 ms/batch (SE +/- 2.74, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: 17.22 items/sec (SE +/- 0.04, N = 3)
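For the synchronous single-stream scenarios above, the two reported metrics are roughly reciprocal: items/sec is approximately 1000 divided by ms/batch (e.g. 122.21 ms/batch corresponds to about 8.18 items/sec, matching the reported 8.1818). A small sanity-check sketch; the batch-size-of-1 assumption for single-stream runs is mine, not stated in the results:

```python
def items_per_sec(ms_per_batch, batch_size=1):
    """Convert per-batch latency in milliseconds to throughput in items/sec.

    batch_size=1 is an assumption for synchronous single-stream runs.
    """
    return 1000.0 / ms_per_batch * batch_size

# Matches the reported 8.1818 items/sec for 122.21 ms/batch to two decimals
print(round(items_per_sec(122.21), 2))  # prints 8.18
```

Note that the asynchronous multi-stream numbers do not follow this relation, since multiple in-flight streams overlap and push throughput well above 1000/latency.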

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.16.1 - 2 x Intel Xeon E5-2680 v4
(images/sec: more is better)

Device: GPU - Batch Size: 16 - Model: AlexNet: 10.83 images/sec (SE +/- 0.06, N = 3)
Device: CPU - Batch Size: 64 - Model: AlexNet: 102.69 images/sec (SE +/- 0.40, N = 3)
Device: CPU - Batch Size: 512 - Model: VGG-16: 6.76 images/sec (SE +/- 0.03, N = 3)
Device: CPU - Batch Size: 32 - Model: AlexNet: 88.37 images/sec (SE +/- 0.90, N = 3)
Device: CPU - Batch Size: 256 - Model: VGG-16: 6.75 images/sec (SE +/- 0.01, N = 3)
Device: CPU - Batch Size: 16 - Model: AlexNet: 71.30 images/sec (SE +/- 0.21, N = 3)
Device: GPU - Batch Size: 64 - Model: VGG-16: 0.92 images/sec
Device: GPU - Batch Size: 32 - Model: VGG-16: 0.90 images/sec (SE +/- 0.01, N = 3)
Device: GPU - Batch Size: 16 - Model: VGG-16: 0.85 images/sec (SE +/- 0.00, N = 3)
Device: GPU - Batch Size: 1 - Model: AlexNet: 3.76 images/sec (SE +/- 0.02, N = 3)
Device: CPU - Batch Size: 64 - Model: VGG-16: 6.49 images/sec (SE +/- 0.01, N = 3)
Device: CPU - Batch Size: 32 - Model: VGG-16: 6.34 images/sec (SE +/- 0.01, N = 3)
Device: CPU - Batch Size: 16 - Model: VGG-16: 5.99 images/sec (SE +/- 0.02, N = 3)
Device: CPU - Batch Size: 1 - Model: AlexNet: 7.49 images/sec (SE +/- 0.06, N = 3)
Device: GPU - Batch Size: 1 - Model: VGG-16: 0.79 images/sec (SE +/- 0.00, N = 3)
Device: CPU - Batch Size: 1 - Model: VGG-16: 2.05 images/sec (SE +/- 0.02, N = 3)
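The images/sec metric reported by tf_cnn_benchmarks-style runs is just total images processed divided by wall time. A minimal sketch of that relationship; the sample numbers are illustrative, not taken from this run:

```python
def images_per_sec(batch_size, num_batches, elapsed_seconds):
    """Throughput in images/sec for a benchmark run of fixed-size batches."""
    return batch_size * num_batches / elapsed_seconds

# Illustrative: 100 batches of 64 images completed in 62.3 seconds
print(round(images_per_sec(64, 100, 62.3), 2))  # prints 102.73
```

This is also why larger batch sizes tend to report higher throughput on the same hardware: per-batch overhead is amortized over more images.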

PyTorch

This is a benchmark of PyTorch making use of pytorch-benchmark [https://github.com/LukasHedegaard/pytorch-benchmark]. Learn more via the OpenBenchmarking.org test page.

PyTorch 2.2.1 - 2 x Intel Xeon E5-2680 v4
(batches/sec: more is better)

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l: 4.21 batches/sec (SE +/- 0.04, N = 3, MIN: 3.28 / MAX: 4.35)
Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l: 4.45 batches/sec (SE +/- 0.04, N = 3, MIN: 3.4 / MAX: 4.59)
Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l: 4.32 batches/sec (SE +/- 0.06, N = 9, MIN: 3.4 / MAX: 4.6)
Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l: 4.29 batches/sec (SE +/- 0.04, N = 9, MIN: 3.33 / MAX: 4.53)
Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l: 4.35 batches/sec (SE +/- 0.04, N = 9, MIN: 3.6 / MAX: 4.55)
Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l: 5.64 batches/sec (SE +/- 0.04, N = 3, MIN: 4.26 / MAX: 6.2)
Device: CPU - Batch Size: 512 - Model: ResNet-152: 8.46 batches/sec (SE +/- 0.09, N = 9, MIN: 1.24 / MAX: 8.92)
Device: CPU - Batch Size: 256 - Model: ResNet-152: 8.51 batches/sec (SE +/- 0.12, N = 3, MIN: 6.24 / MAX: 8.82)
Device: CPU - Batch Size: 64 - Model: ResNet-152: 8.40 batches/sec (SE +/- 0.08, N = 9, MIN: 6.58 / MAX: 8.89)
Device: CPU - Batch Size: 512 - Model: ResNet-50: 22.13 batches/sec (SE +/- 0.21, N = 3, MIN: 19.45 / MAX: 22.71)
Device: CPU - Batch Size: 32 - Model: ResNet-152: 8.34 batches/sec (SE +/- 0.01, N = 3, MIN: 6.58 / MAX: 8.48)
Device: CPU - Batch Size: 256 - Model: ResNet-50: 21.98 batches/sec (SE +/- 0.23, N = 12, MIN: 15.73 / MAX: 23.11)
Device: CPU - Batch Size: 16 - Model: ResNet-152: 8.30 batches/sec (SE +/- 0.06, N = 3, MIN: 6.51 / MAX: 8.57)
Device: CPU - Batch Size: 64 - Model: ResNet-50: 22.00 batches/sec (SE +/- 0.23, N = 5, MIN: 15.47 / MAX: 22.89)
Device: CPU - Batch Size: 32 - Model: ResNet-50: 22.26 batches/sec (SE +/- 0.28, N = 4, MIN: 18.7 / MAX: 23.31)
Device: CPU - Batch Size: 16 - Model: ResNet-50: 22.13 batches/sec (SE +/- 0.07, N = 3, MIN: 19.61 / MAX: 22.61)
Device: CPU - Batch Size: 1 - Model: ResNet-152: 10.32 batches/sec (SE +/- 0.07, N = 12, MIN: 6.9 / MAX: 11.18)
Device: CPU - Batch Size: 1 - Model: ResNet-50: 27.49 batches/sec (SE +/- 0.28, N = 5, MIN: 16.84 / MAX: 29.04)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and similar use cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - 2 x Intel Xeon E5-2680 v4 (Microseconds, fewer is better):

Model: Mobilenet Quant = 3904.20 (SE +/- 29.07, N = 3)
Model: Mobilenet Float = 2491.03 (SE +/- 35.88, N = 12)
Model: Inception V4 = 34692.4 (SE +/- 319.16, N = 15)
Model: SqueezeNet = 3967.06 (SE +/- 67.64, N = 12)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 0.2 - Input: 26 Minute Long Talking Sample - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better): 18.19 (SE +/- 0.26, N = 3). Compiler options: gcc -O2 -pedantic -fvisibility=hidden
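Since a 26-minute sample is denoised in about 18.19 seconds on a single thread, the network runs far faster than real time. The arithmetic, using the figures above:

```python
audio_seconds = 26 * 60      # length of the talking sample
denoise_seconds = 18.19      # single-threaded result reported above
realtime_factor = audio_seconds / denoise_seconds
print(f"~{realtime_factor:.1f}x faster than real time")
```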

R Benchmark

This test is a quick-running survey of general R performance. Learn more via the OpenBenchmarking.org test page.

R Benchmark - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better): 0.2279 (SE +/- 0.0016, N = 13)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better): 193.78 (SE +/- 0.69, N = 3)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark - 2 x Intel Xeon E5-2680 v4 (Score, more is better): 270.37 (SE +/- 0.46, N = 3)

oneDNN

oneDNN 3.6 - Engine: CPU - 2 x Intel Xeon E5-2680 v4 (ms, fewer is better):

Harness: Recurrent Neural Network Inference = 1018.67 (SE +/- 7.40, N = 3; min 952.1)
Harness: Recurrent Neural Network Training = 1845.75 (SE +/- 10.64, N = 3; min 1764.7)
Harness: Deconvolution Batch shapes_3d = 5.21706 (SE +/- 0.04255, N = 15; min 4.91)
Harness: Deconvolution Batch shapes_1d = 13.17 (SE +/- 0.13, N = 3; min 9.29)
Harness: Convolution Batch Shapes Auto = 14.00 (SE +/- 0.02, N = 3; min 13.81)
Harness: IP Shapes 3D = 12.04 (SE +/- 0.03, N = 3; min 11.85)
Harness: IP Shapes 1D = 3.36355 (SE +/- 0.00512, N = 3; min 3.18)

Compiler options: g++ -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Cpuminer-Opt

Cpuminer-Opt 24.3 - 2 x Intel Xeon E5-2680 v4 (kH/s, more is better):

Algorithm: Triple SHA-256, Onecoin = 64980 (SE +/- 5.77, N = 3)
Algorithm: Quad SHA-256, Pyrite = 45963 (SE +/- 459.94, N = 3)
Algorithm: LBC, LBRY Credits = 10883 (SE +/- 3.33, N = 3)
Algorithm: Myriad-Groestl = 8846.55 (SE +/- 7.21, N = 3)
Algorithm: Skeincoin = 29780 (SE +/- 215.02, N = 3)
Algorithm: Garlicoin = 2995.14 (SE +/- 17.79, N = 3)
Algorithm: Blake-2 S = 124240 (SE +/- 210.08, N = 3)
Algorithm: Ringcoin = 2590.17 (SE +/- 9.31, N = 3)
Algorithm: Deepcoin = 6558.28 (SE +/- 2.36, N = 3)
Algorithm: scrypt = 208.91 (SE +/- 0.14, N = 3)
Algorithm: x20r = 5936.61 (SE +/- 2.22, N = 3)
Algorithm: Magi = 474.91 (SE +/- 0.82, N = 3)

Compiler options: g++ -O2 -lcurl -lz -lpthread -lgmp

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better):

Build System: Unix Makefiles = 604.37 (SE +/- 1.83, N = 3)
Build System: Ninja = 497.46 (SE +/- 0.54, N = 3)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.21 - Hash Count: 1M - 2 x Intel Xeon E5-2680 v4 (H/s, more is better):

Variant: CryptoNight-Femto UPX2 = 8252.7 (SE +/- 30.75, N = 3)
Variant: CryptoNight-Heavy = 8230.4 (SE +/- 29.52, N = 3)
Variant: GhostRider = 1582.9 (SE +/- 0.69, N = 3)
Variant: Wownero = 12688.7 (SE +/- 82.76, N = 3)
Variant: Monero = 8225.4 (SE +/- 33.81, N = 3)
Variant: KawPow = 8243.6 (SE +/- 18.01, N = 3)

Compiler options: g++ -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
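The `Tags: 2,5000,1` notation in the configurations below appears to be InfluxDB Inch's tag-cardinality specification: three tag keys with 2, 5000, and 1 possible values respectively. Assuming that reading (the exact semantics are defined by the Inch tool), the workload size works out as follows:

```python
from math import prod

tag_cardinalities = [2, 5000, 1]   # "Tags: 2,5000,1" (assumed Inch cardinality spec)
points_per_series = 10000          # "Points Per Series: 10000"

# Each distinct combination of tag values is one series
series = prod(tag_cardinalities)
total_points = series * points_per_series
print(series, total_points)
```

That is 10,000 series and 100 million points per benchmark run.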

InfluxDB 1.8.2 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - 2 x Intel Xeon E5-2680 v4 (val/sec, more is better):

Concurrent Streams: 1024 = 966559.2 (SE +/- 2360.15, N = 3)
Concurrent Streams: 64 = 938427.0 (SE +/- 6591.71, N = 12)
Concurrent Streams: 4 = 281287.4 (SE +/- 3039.46, N = 3)

Apache Siege

Apache Siege 2.4.62 - 2 x Intel Xeon E5-2680 v4 (Transactions Per Second, more is better):

Concurrent Users: 1000 = 20260.73 (SE +/- 81.59, N = 3)
Concurrent Users: 500 = 20425.82 (SE +/- 71.12, N = 3)
Concurrent Users: 200 = 20413.98 (SE +/- 52.17, N = 3)
Concurrent Users: 100 = 21617.09 (SE +/- 38.51, N = 3)
Concurrent Users: 50 = 22369.70 (SE +/- 8.83, N = 3)
Concurrent Users: 10 = 23642.07 (SE +/- 129.08, N = 3)

Compiler options: gcc -O2 -lpthread -ldl -lssl -lcrypto -lz

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
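The set:get ratios in the configurations below describe the command mix: 1:10 means one SET is issued for every ten GETs. A small sketch of how such a mix can be generated (illustrative only, not memtier_benchmark's actual internals):

```python
def command_mix(total_ops, set_part, get_part):
    """Yield a repeating SET/GET pattern matching a set:get ratio."""
    cycle = ["SET"] * set_part + ["GET"] * get_part
    for i in range(total_ops):
        yield cycle[i % len(cycle)]

# 1:10 ratio over 1100 operations -> 100 SETs, 1000 GETs
ops = list(command_mix(1100, 1, 10))
print(ops.count("SET"), ops.count("GET"))
```

Read-heavy mixes (1:5, 1:10) tend to score higher below because GETs are cheaper than SETs.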

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - 2 x Intel Xeon E5-2680 v4 (Ops/sec, more is better):

Clients: 100 - Set To Get Ratio: 1:10 = 1481597.00 (SE +/- 8411.62, N = 3)
Clients: 100 - Set To Get Ratio: 10:1 = 1277786.02 (SE +/- 4259.56, N = 3)
Clients: 50 - Set To Get Ratio: 1:10 = 1517459.59 (SE +/- 4611.65, N = 3)
Clients: 50 - Set To Get Ratio: 10:1 = 1331731.41 (SE +/- 2849.14, N = 3)
Clients: 100 - Set To Get Ratio: 5:1 = 1303976.38 (SE +/- 3475.92, N = 3)
Clients: 100 - Set To Get Ratio: 1:5 = 1480526.55 (SE +/- 7788.01, N = 3)
Clients: 100 - Set To Get Ratio: 1:1 = 1365973.23 (SE +/- 4068.22, N = 3)
Clients: 50 - Set To Get Ratio: 5:1 = 1361024.38 (SE +/- 2137.82, N = 3)
Clients: 50 - Set To Get Ratio: 1:5 = 1609507.71 (SE +/- 3714.54, N = 3)
Clients: 50 - Set To Get Ratio: 1:1 = 1524635.99 (SE +/- 4640.23, N = 3)

Compiler options: g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - 2 x Intel Xeon E5-2680 v4 (Requests Per Second, more is better):

Test: LPUSH - Parallel Connections: 1000 = 1236129.42 (SE +/- 7237.07, N = 3)
Test: SADD - Parallel Connections: 1000 = 1571264.33 (SE +/- 10619.90, N = 3)
Test: LPUSH - Parallel Connections: 500 = 1251818.18 (SE +/- 10915.30, N = 7)
Test: SET - Parallel Connections: 1000 = 1407706.52 (SE +/- 13555.82, N = 6)
Test: SADD - Parallel Connections: 500 = 1542038.12 (SE +/- 8955.86, N = 3)
Test: LPUSH - Parallel Connections: 50 = 1390421.08 (SE +/- 17610.01, N = 3)
Test: GET - Parallel Connections: 1000 = 1783987.96 (SE +/- 7775.96, N = 3)
Test: SET - Parallel Connections: 500 = 1403511.36 (SE +/- 22841.41, N = 12)
Test: SADD - Parallel Connections: 50 = 1719230.85 (SE +/- 13772.36, N = 9)
Test: LPOP - Parallel Connections: 50 = 2203333.82 (SE +/- 17716.01, N = 15)
Test: GET - Parallel Connections: 500 = 1769970.88 (SE +/- 19833.99, N = 3)
Test: SET - Parallel Connections: 50 = 1574262.85 (SE +/- 15716.01, N = 5)
Test: GET - Parallel Connections: 50 = 2246300.92 (SE +/- 14437.10, N = 3)

Compiler options: g++ -MM -MT -g3 -fvisibility=hidden -O3

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - 2 x Intel Xeon E5-2680 v4 (Ops/sec, more is better):

Set To Get Ratio: 1:100 = 3195806.54 (SE +/- 26780.92, N = 3)
Set To Get Ratio: 1:10 = 3191700.60 (SE +/- 2765.51, N = 3)
Set To Get Ratio: 5:1 = 809259.92 (SE +/- 3234.70, N = 3)
Set To Get Ratio: 1:5 = 2992050.82 (SE +/- 13235.65, N = 3)
Set To Get Ratio: 1:1 = 1268136.64 (SE +/- 16311.34, N = 3)

Compiler options: g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better):

Encoder Speed: 10, Lossless = 8.388 (SE +/- 0.080, N = 3)
Encoder Speed: 6, Lossless = 11.93 (SE +/- 0.07, N = 3)
Encoder Speed: 6 = 6.731 (SE +/- 0.043, N = 3)
Encoder Speed: 2 = 79.42 (SE +/- 0.28, N = 3)
Encoder Speed: 0 = 150.33 (SE +/- 0.77, N = 3)

Compiler options: g++ -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
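The results below show the usual level tradeoff: higher compression levels run far slower while decompression speed stays roughly flat. The same effect can be reproduced with any level-based compressor; here is a sketch using Python's stdlib zlib (standing in for Zstd, which has no stdlib binding) on repetitive data:

```python
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 2000

fast = zlib.compress(data, level=1)    # favors speed, looser packing
tight = zlib.compress(data, level=9)   # favors ratio, more search effort
assert zlib.decompress(tight) == data  # lossless round trip
print(len(data), len(fast), len(tight))
```

On this input the level-9 output is no larger than the level-1 output, mirroring the level-3 vs level-19 spread in the Zstd numbers.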

Zstd Compression 1.5.4 - 2 x Intel Xeon E5-2680 v4 (MB/s, more is better):

Compression Level: 19, Long Mode - Decompression Speed = 592.4 (SE +/- 0.40, N = 3)
Compression Level: 19, Long Mode - Compression Speed = 6.19 (SE +/- 0.08, N = 3)
Compression Level: 8, Long Mode - Decompression Speed = 686.6 (SE +/- 0.21, N = 3)
Compression Level: 8, Long Mode - Compression Speed = 495.7 (SE +/- 1.63, N = 3)
Compression Level: 3, Long Mode - Decompression Speed = 730.5 (SE +/- 2.39, N = 3)
Compression Level: 3, Long Mode - Compression Speed = 585.4 (SE +/- 3.88, N = 3)
Compression Level: 19 - Decompression Speed = 578.7 (SE +/- 0.74, N = 3)
Compression Level: 19 - Compression Speed = 11.3 (SE +/- 0.07, N = 3)
Compression Level: 12 - Decompression Speed = 626.5 (SE +/- 0.86, N = 3)
Compression Level: 12 - Compression Speed = 133.6 (SE +/- 1.42, N = 3)
Compression Level: 8 - Decompression Speed = 668.3 (SE +/- 1.84, N = 3)
Compression Level: 8 - Compression Speed = 521.7 (SE +/- 5.93, N = 3)
Compression Level: 3 - Decompression Speed = 682.6 (SE +/- 1.71, N = 3)
Compression Level: 3 - Compression Speed = 1436.4 (SE +/- 8.07, N = 3)

Compiler options: gcc -O3 -pthread -lz -llzma

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 21.7.2 - Time To Compile - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better): 2052.80 (SE +/- 29.50, N = 3)

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better): 76.41 (SE +/- 0.63, N = 3). Compiler options: gcc -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Blender

Blender 4.2 - Compute: CPU-Only - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better):

Blend File: Pabellon Barcelona = 233.47 (SE +/- 0.28, N = 3)
Blend File: Barbershop = 736.44 (SE +/- 0.76, N = 3)
Blend File: Fishy Cat = 105.76 (SE +/- 0.32, N = 3)
Blend File: Classroom = 215.31 (SE +/- 2.43, N = 4)
Blend File: Junkshop = 99.22 (SE +/- 0.16, N = 3)
Blend File: BMW27 = 72.25 (SE +/- 0.32, N = 3)

AOM AV1

AOM AV1 3.9 - 2 x Intel Xeon E5-2680 v4 (Frames Per Second, more is better):

Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p = 71.63 (SE +/- 0.80, N = 15)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p = 71.48 (SE +/- 0.76, N = 15)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p = 66.04 (SE +/- 0.85, N = 15)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p = 31.70 (SE +/- 0.28, N = 15)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p = 65.12 (SE +/- 0.49, N = 15)
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p = 11.71 (SE +/- 0.08, N = 3)
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p = 0.65 (SE +/- 0.01, N = 3)
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K = 32.91 (SE +/- 0.13, N = 3)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K = 32.85 (SE +/- 0.34, N = 3)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K = 31.83 (SE +/- 0.31, N = 3)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K = 29.29 (SE +/- 0.19, N = 3)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K = 12.00 (SE +/- 0.04, N = 3)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K = 31.26 (SE +/- 0.24, N = 15)
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K = 5.36 (SE +/- 0.03, N = 3)
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K = 0.21 (SE +/- 0.00, N = 3)

Compiler options: g++ -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 - 2 x Intel Xeon E5-2680 v4 (k/s, more is better): 72471.82 (SE +/- 97.52, N = 3). Compiler options: g++ -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better):

Model: Bird Strike on Windshield = 230.48 (SE +/- 0.88, N = 3)
Model: Cell Phone Drop Test = 75.12 (SE +/- 0.19, N = 3)
Model: Bumper Beam = 143.24 (SE +/- 1.00, N = 3)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equation and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - 2 x Intel Xeon E5-2680 v4 (Seconds, fewer is better):

Input: input.i3d 193 Cells Per Direction = 46.69 (SE +/- 0.11, N = 3)
Input: input.i3d 129 Cells Per Direction = 10.65 (SE +/- 0.10, N = 6)
Input: X3D-benchmarking input.i3d = 1163.96 (SE +/- 16.45, N = 9)

Compiler options: gfortran -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 (Hydro Cycle Time - Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4:

Test: leblancbig: 21.82 (SE +/- 0.04, N = 3)
Test: sedovbig: 38.22 (SE +/- 0.05, N = 3)

1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better) on 2 x Intel Xeon E5-2680 v4: 702647233 (SE +/- 3355643.09, N = 3). 1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

QuantLib

QuantLib 1.35-dev (tasks/s, More Is Better) on 2 x Intel Xeon E5-2680 v4:

Size: XXS: 10.95 (SE +/- 0.01, N = 3)
Size: S: 10.44 (SE +/- 0.01, N = 3)

1. (CXX) g++ options: -O3 -fPIE -pie

NAMD

NAMD 3.0 (ns/day, More Is Better) on 2 x Intel Xeon E5-2680 v4:

Input: STMV with 1,066,628 Atoms: 0.32870 (SE +/- 0.00061, N = 3)
Input: ATPase with 327,506 Atoms: 1.09268 (SE +/- 0.00795, N = 3)

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
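Speedup figures like CLOMP's can be sanity-checked against Amdahl's law; a minimal sketch (the parallel fraction and core count below are hypothetical illustrations, not values from this run):

```python
def amdahl_speedup(p, n):
    """Ideal speedup of a workload with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical: a 97% parallelizable workload on 28 cores tops out
# well below 28x because of its serial remainder.
print(round(amdahl_speedup(0.97, 28), 2))  # 15.47
```

Real OpenMP overheads (thread startup, barriers, scheduling) push the measured speedup below this ideal bound, which is the gap CLOMP is built to quantify.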

CLOMP 1.2, Static OMP Speedup (Speedup, More Is Better) on 2 x Intel Xeon E5-2680 v4: 25.6 (SE +/- 0.29, N = 15). 1. (CC) gcc options: -fopenmp -O3 -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4:

Test: OpenMP Streamcluster: 13.75 (SE +/- 0.13, N = 15)
Test: OpenMP CFD Solver: 11.16 (SE +/- 0.11, N = 3)
Test: OpenMP Leukocyte: 66.19 (SE +/- 0.72, N = 3)
Test: OpenMP HotSpot3D: 141.43 (SE +/- 0.06, N = 3)

1. (CXX) g++ options: -O2 -lOpenCL

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901, Implementation: OpenMP (More Is Better) on 2 x Intel Xeon E5-2680 v4:

Input Deck: BM2: 20.62 Billion Interactions/s (SE +/- 0.02, N = 3)
Input Deck: BM2: 515.58 GFInst/s (SE +/- 0.62, N = 3)
Input Deck: BM1: 19.51 Billion Interactions/s (SE +/- 0.09, N = 3)
Input Deck: BM1: 487.66 GFInst/s (SE +/- 2.27, N = 3)

1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

LeelaChessZero

LeelaChessZero 0.31.1, Backend: BLAS (Nodes Per Second, More Is Better) on 2 x Intel Xeon E5-2680 v4: 65 (SE +/- 1.15, N = 9). 1. (CXX) g++ options: -flto -pthread

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
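The same average-of-timed-micro-tests idea can be sketched with Python's standard timeit module; the two workloads below are hypothetical stand-ins, not PyBench's actual test set:

```python
import timeit

# Two toy workloads standing in for PyBench-style micro-tests.
workloads = {
    "BuiltinFunctionCalls": "len('hello'); abs(-3); min(1, 2)",
    "NestedForLoops": "s = 0\nfor i in range(10):\n    for j in range(10):\n        s += 1",
}

# Time each workload and sum the average per-call times, in milliseconds.
total_ms = 0.0
for name, stmt in workloads.items():
    per_call = timeit.timeit(stmt, number=1000) / 1000  # avg seconds per call
    total_ms += per_call * 1000
print(f"total of average test times: {total_ms:.4f} ms")
```

PyBench's headline number is exactly this kind of total, computed over its full battery of function-call, loop, string, and arithmetic micro-tests.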

PyBench 2018-02-16, Total For Average Test Times (Milliseconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 1161 (SE +/- 1.20, N = 3)

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
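A rough analogue can be sketched in Python by ping-ponging between two threads and timing the round trips; note this measures wall-clock time per switch (including substantial interpreter overhead), not raw clock cycles as ctx_clock does:

```python
import threading
import time

N = 1000
a, b = threading.Event(), threading.Event()

def pong():
    # Wait for the main thread's signal, then hand control back.
    for _ in range(N):
        a.wait(); a.clear()
        b.set()

t = threading.Thread(target=pong)
t.start()
start = time.perf_counter()
for _ in range(N):
    a.set()
    b.wait(); b.clear()
elapsed = time.perf_counter() - start
t.join()

# Each loop iteration forces (at least) two context switches.
print(f"~{elapsed / (2 * N) * 1e9:.0f} ns per switch (wall clock)")
```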

ctx_clock, Context Switch Time (Clocks, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 1063 (SE +/- 0.67, N = 3)

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
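The underlying problem can be sketched compactly with bitmask backtracking; this is an illustrative single-threaded Python version, not the benchmark's multi-threaded C++ implementation:

```python
def count_queens(n):
    """Count solutions to the n-queens problem via bitmask backtracking."""
    full = (1 << n) - 1  # all n columns occupied

    def solve(cols, diag1, diag2):
        if cols == full:
            return 1
        total = 0
        free = ~(cols | diag1 | diag2) & full  # attack-free squares this row
        while free:
            bit = free & -free                 # lowest free square
            free -= bit
            total += solve(cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1)
        return total

    return solve(0, 0, 0)

print(count_queens(8))  # 92 solutions on the classic 8x8 board
```

m-queens applies the same search, splitting the top-level branches across OpenMP threads, which is why it scales with core count.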

m-queens 1.2, Time To Solve (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 36.86 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -fopenmp -O2 -march=native

Hackbench

This is a benchmark of Hackbench, a test of the Linux kernel scheduler. Learn more via the OpenBenchmarking.org test page.

Hackbench, Count: 32 - Type: Process (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 57.43 (SE +/- 0.07, N = 3). 1. (CC) gcc options: -lpthread

Cython Benchmark

Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark to the system's Cython performance. Learn more via the OpenBenchmarking.org test page.

Cython Benchmark 0.29.21, Test: N-Queens (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 32.49 (SE +/- 0.21, N = 3)

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7, Trace Time (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 23.76 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -R/usr/lib -lSM -lICE -lX11 -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

C-Ray

C-Ray 2.0, Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 0.618 (SE +/- 0.005, N = 3). 1. (CC) gcc options: -lpthread -lm

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.3.4, Time To Compile (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 78.38 (SE +/- 0.45, N = 3)

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2, Time To Compile (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 1491.16 (SE +/- 15.50, N = 5)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better) on 2 x Intel Xeon E5-2680 v4: 62366573 (SE +/- 347481.30, N = 3)

7-Zip Compression

7-Zip Compression 24.05 (MIPS, More Is Better) on 2 x Intel Xeon E5-2680 v4:

Test: Decompression Rating: 122760 (SE +/- 70.29, N = 3)
Test: Compression Rating: 137422 (SE +/- 904.38, N = 3)

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Himeno Benchmark

The Himeno benchmark is a linear solver of pressure Poisson using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
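The point-Jacobi iteration that Himeno measures can be illustrated on a toy 1-D Poisson problem; this sketch uses plain Python and a tiny grid, nothing like the benchmark's tuned C kernel:

```python
# Solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 by point-Jacobi iteration.
# Discretization: u[i] = (u[i-1] + u[i+1] + h*h*f[i]) / 2 with f = 1.
n = 31                      # interior grid points; midpoint lands at x = 0.5
h = 1.0 / (n + 1)
u = [0.0] * n

for _ in range(5000):       # fixed sweep count; real solvers check residuals
    u = [((u[i - 1] if i else 0.0)
          + (u[i + 1] if i < n - 1 else 0.0)
          + h * h) / 2.0
         for i in range(n)]

# The exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
print(round(u[15], 4))
```

Himeno runs the same pointwise update on a 3-D pressure grid and reports the resulting floating-point throughput in MFLOPS.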

Himeno Benchmark 3.0, Poisson Pressure Solver (MFLOPS, More Is Better) on 2 x Intel Xeon E5-2680 v4: 3324.24 (SE +/- 6.30, N = 3). 1. (CC) gcc options: -O3 -mavx2

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 2023.03.14, Test: Blowfish (Real C/S, More Is Better) on 2 x Intel Xeon E5-2680 v4: 30661 (SE +/- 66.69, N = 3). 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -lm -lrt -lz -ldl -lcrypt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Savina Reactors.IO (ms, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 11385.6 (SE +/- 136.43, N = 12; MIN: 10698.19 / MAX: 20243.26)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance of various popular real-world Java workloads. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 23.11 (msec, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4:

Java Test: Tradebeans: 19157 (SE +/- 185.70, N = 3)
Java Test: Jython: 8144 (SE +/- 47.04, N = 3)

Rodinia


Rodinia 3.1, Test: OpenMP LavaMD (Seconds, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 133.48 (SE +/- 0.28, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better) on 2 x Intel Xeon E5-2680 v4:

Test / Class: LU.C: 46814.61 (SE +/- 40.59, N = 3)
Test / Class: EP.C: 2736.22 (SE +/- 10.98, N = 3)

1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

CacheBench

This is a performance test of CacheBench, which is part of LLCbench. CacheBench is designed to test the memory and cache bandwidth performance. Learn more via the OpenBenchmarking.org test page.
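The idea behind such a bandwidth test can be sketched in a few lines of Python; note this times interpreter-level buffer copies, not the tuned C loops CacheBench uses, so absolute numbers are not comparable:

```python
import time

# Copy a buffer repeatedly and report throughput, CacheBench-style.
size = 8 * 1024 * 1024          # 8 MiB working set (larger than most L2 caches)
src = bytearray(size)
dst = bytearray(size)

reps = 20
start = time.perf_counter()
for _ in range(reps):
    dst[:] = src                # one read pass + one write pass over the buffer
elapsed = time.perf_counter() - start

mb_moved = 2 * size * reps / 1e6    # bytes read plus bytes written
print(f"~{mb_moved / elapsed:.0f} MB/s (Python slice copy, not a tuned kernel)")
```

Varying the working-set size above and below the cache sizes is what produces the characteristic bandwidth plateaus such tests report.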

CacheBench (MB/s, More Is Better), system label: r1:

Test: Read / Modify / Write: 65744.18 (SE +/- 147.90, N = 3; MIN: 46193.67 / MAX: 78582.68)
Test: Write: 36891.33 (SE +/- 391.50, N = 5; MIN: 22723.54 / MAX: 48193.4)
Test: Read: 8361.63 (SE +/- 0.03, N = 3; MIN: 8349.48 / MAX: 8369.76)

1. (CC) gcc options: -O3 -lrt

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.7, Test: DNN - Deep Neural Network (ms, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 50567 (SE +/- 2440.79, N = 15). 1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Llamafile

Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-wizardcoder: line 2: ./wizardcoder-python-34b-v1.0.Q6_K.llamafile.86: No such file or directory

Test: mistral-7b-instruct-v0.2.Q8_0 - Acceleration: CPU

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-mistral: line 2: ./mistral-7b-instruct-v0.2.Q5_K_M.llamafile.86: No such file or directory

Test: llava-v1.5-7b-q4 - Acceleration: CPU

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory

Llama.cpp

Models: llama-2-70b-chat.Q5_0.gguf / llama-2-13b.Q4_0.gguf / llama-2-7b.Q4_0.gguf

2 x Intel Xeon E5-2680 v4: Each of these three model tests quit with a non-zero exit status. E: main: error: unable to load model

Scikit-Learn

Scikit-learn is a BSD-licensed Python module for machine learning built on NumPy and SciPy. Learn more via the OpenBenchmarking.org test page.

2 x Intel Xeon E5-2680 v4: Every scikit-learn benchmark attempted quit with a non-zero exit status, all with the same error. E: ImportError: /lib/x86_64-linux-gnu/liblapack.so.3: undefined symbol: gotoblas

The affected benchmarks: Sparse Random Projections / 100 Iterations; Kernel PCA Solvers / Time vs. N Components; Kernel PCA Solvers / Time vs. N Samples; Hist Gradient Boosting Categorical Only; Plot Non-Negative Matrix Factorization; Plot Polynomial Kernel Approximation; 20 Newsgroups / Logistic Regression; Hist Gradient Boosting Higgs Boson; Plot Singular Value Decomposition; Hist Gradient Boosting Threading; Isotonic / Perturbed Logarithm; Hist Gradient Boosting Adult; Covertype Dataset Benchmark; Sample Without Replacement; RCV1 Logreg Convergence; Isotonic / Pathological; Plot Parallel Pairwise; Hist Gradient Boosting; Plot Incremental PCA; Isotonic / Logistic; TSNE MNIST Dataset; LocalOutlierFactor; Feature Expansions; Plot OMP vs. LARS; Plot Hierarchical; Text Vectorizers; Plot Fast KMeans; Isolation Forest; Plot Lasso Path; SGDOneClassSVM; SGD Regression; Plot Neighbors; MNIST Dataset; Plot Ward; Sparsify; Glmnet; Lasso; Tree; SAGA; GLM.

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Benchmarks: scikit_linearridgeregression / scikit_svm / scikit_qda / scikit_ica

2 x Intel Xeon E5-2680 v4: Each of these four benchmarks quit with a non-zero exit status. E: TypeError: load_all() missing 1 required positional argument: 'Loader'

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ModuleNotFoundError: No module named 'tensorflow'

ONNX Runtime

Models: Faster R-CNN R-50-FPN-int8 / ResNet101_DUC_HDC-12 / super-resolution-10 / ResNet50 v1-12-int8 / ArcFace ResNet-100 / fcn-resnet101-11 / CaffeNet 12-int8 / bertsquad-12 / T5 Encoder / ZFNet-512 / yolov4 / GPT-2 - Device: CPU - Executor: Standard and Parallel

2 x Intel Xeon E5-2680 v4: Every one of these configurations quit with a non-zero exit status. E: ./onnx: line 2: ./onnxruntime/build/Linux/Release/onnxruntime_perf_test: No such file or directory

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

FP16: No - Mode: Inference - Network: ResNet 50 and VGG16 - Device: CPU

2 x Intel Xeon E5-2680 v4: Both networks quit with a non-zero exit status. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.11/collections/__init__.py)
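This failure is not specific to PlaidML: Python 3.10 removed the ABC aliases that the collections module had re-exported, so on the Python 3.11 used here these names must be imported from collections.abc. A minimal illustration of the broken and working forms:

```python
# Fails on Python >= 3.10:
#     from collections import Iterable
# Works on all supported Python 3 versions:
from collections.abc import Iterable

assert isinstance([1, 2, 3], Iterable)   # lists are iterable
assert not isinstance(42, Iterable)      # plain ints are not
```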

XNNPACK

XNNPACK b7b048 (us, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4:

Model: FP16MobileNetV2: 3685 (SE +/- 209.77, N = 3)
Model: FP32MobileNetV2: 3725 (SE +/- 275.46, N = 3)

1. (CXX) g++ options: -O3 -lrt -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 (ms, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4:

Target: Vulkan GPU - Model: blazeface: 3.90 (SE +/- 0.19, N = 3; MIN: 3.45 / MAX: 32.38)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: 7.68 (SE +/- 0.37, N = 3; MIN: 6.95 / MAX: 135.3)
Target: CPU-v2-v2 - Model: mobilenet-v2: 7.93 (SE +/- 0.24, N = 12; MIN: 7.1 / MAX: 581.19)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: squeezenetv1.1 (ms, Fewer Is Better) on 2 x Intel Xeon E5-2680 v4: 5.825 (SE +/- 0.151, N = 9; MIN: 4.27 / MAX: 16.91). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and Googlenet model and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Models: GoogleNet and AlexNet - Acceleration: CPU - Iterations: 1000 / 200 / 100

2 x Intel Xeon E5-2680 v4: All six configurations quit with a non-zero exit status. E: ./caffe: 3: ./tools/caffe: not found

spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ValueError: 'in' is not a valid parameter name

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.

Device: GPU - Batch Size: 512 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 512 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 256 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 256 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 512 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 512 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 256 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 256 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 64 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 64 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 32 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 32 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 16 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 16 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 64 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 64 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 32 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 32 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 16 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 16 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 512 - Model: AlexNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 256 - Model: AlexNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 1 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 1 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 512 - Model: AlexNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 256 - Model: AlexNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 1 - Model: ResNet-50

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 1 - Model: GoogLeNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 64 - Model: AlexNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 512 - Model: VGG-16

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 32 - Model: AlexNet

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Device: GPU - Batch Size: 256 - Model: VGG-16

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. The test quit with a non-zero exit status.

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and similar use cases. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
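The averaged results throughout this page are reported as a mean with a standard error over N runs (e.g. "SE +/- 707.81, N = 15"). A minimal sketch of that aggregation, using illustrative timing samples rather than data from this run:

```python
import statistics

def summarize(samples):
    """Return (mean, standard error) for a list of timing samples.

    Standard error = sample standard deviation / sqrt(N), which is the
    "SE +/-" figure shown next to each averaged benchmark result.
    """
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    return mean, se

# Illustrative inference times in microseconds (not real data).
runs = [42100.0, 43800.0, 42500.0, 43600.0]
mean, se = summarize(runs)
print(f"{mean:.1f} us, SE +/- {se:.1f}, N = {len(runs)}")
```

A larger N shrinks the standard error, which is why noisier tests here (e.g. Redis) are run 12 to 15 times.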

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better) - 2 x Intel Xeon E5-2680 v4: 42979.1, SE +/- 707.81, N = 15

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, Fewer Is Better) - 2 x Intel Xeon E5-2680 v4: 41794.0, SE +/- 722.59, N = 15

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 10:1

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: LPOP - Parallel Connections: 1000 (Requests Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4: 1785408.10, SE +/- 116725.42, N = 12. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 7.0.4 - Test: LPOP - Parallel Connections: 500 (Requests Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4: 1955391.54, SE +/- 30966.88, N = 15. 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

Benchmark: Sequential Fill

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Random Delete

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Seek Random

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Random Read

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Random Fill

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Overwrite

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Fill Sync

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

Benchmark: Hot Read

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ./leveldb: 3: ./db_bench: not found

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark makes use of the wrk program to drive HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

Connections: 20

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Connections: 1

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

AOM AV1

AOM AV1 3.9 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4: 69.78, SE +/- 1.11, N = 15. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Benchmark: MD5

2 x Intel Xeon E5-2680 v4 - mgag200drmfb - Dell: The test quit with a non-zero exit status.

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

Input: motorBike

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

Model: Chrysler Neon 1M

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ** ERROR: INPUT FILE /NEON1M11_0001.rad NOT FOUND

Model: Ford Taurus 10M

2 x Intel Xeon E5-2680 v4: The test run did not produce a result.

OpenFOAM


Input: drivaerFastback, Small Mesh Size

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: [0] --> FOAM FATAL ERROR:

LeelaChessZero

LeelaChessZero 0.31.1 - Backend: Eigen (Nodes Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4: 54, SE +/- 1.37, N = 9. 1. (CXX) g++ options: -flto -pthread

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Test: SMP Parallel

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: make: time: No such file or directory

Test: Serial

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: make: time: No such file or directory

oneDNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: conv '--cfg=f32'

Harness: Convolution Batch conv_alexnet - Data Type: f32

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: conv '--cfg=f32'

Harness: Deconvolution Batch deconv_1d - Data Type: f32

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: driver: ERROR: unknown option: deconv '--cfg=f32'

Rust Mandelbrot

This test profile measures the combined time for the serial and parallel Mandelbrot sets written in Rust via willi-kappler/mandel-rust. Learn more via the OpenBenchmarking.org test page.
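The workload being timed is the classic Mandelbrot escape-time computation. A minimal Python sketch of the per-point iteration (the benchmark itself runs the Rust implementation from willi-kappler/mandel-rust over a full grid, both serially and in parallel):

```python
def mandel_iterations(c: complex, max_iter: int = 256) -> int:
    """Count iterations of z -> z*z + c before |z| exceeds 2.

    Points that never escape within max_iter are treated as inside
    the Mandelbrot set; the benchmark times this loop over a grid
    of c values spanning the complex plane.
    """
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

# 0+0j never escapes (inside the set); 2+2j escapes on the first step.
print(mandel_iterations(0j), mandel_iterations(2 + 2j))
```

Because each grid point is independent, the computation parallelizes trivially, which is why the test profile reports both serial and parallel times.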

Time To Complete Serial/Parallel Mandelbrot

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: ./rust-mandel: 3: ./target/release/mandel: not found

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Time To Compile

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: llvm-16.0.0.src/tools/llvm-readobj/ELFDumper.cpp:7556:1: fatal error: error writing to /tmp/ccjmapwn.s: No space left on device

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Time To Compile

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status. E: kernel/rcu/tree.c:5174: fatal error: error writing to /tmp/cc31CL9j.s: Success

Stockfish

Stockfish 17 - Total Time (Nodes Per Second, More Is Better) - 2 x Intel Xeon E5-2680 v4: 43261094, SE +/- 1759062.68, N = 6. 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

H.264 Video Encoding

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

1080p 8-bit YUV To VP9 Video Encode

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

1080p 8-bit YUV To HEVC Video Encode

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

SVT-AV1

1080p 8-bit YUV To AV1 Video Encode

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Renaissance

Renaissance is a suite of benchmarks designed to test the Java Virtual Machine (JVM), with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Test: Apache Spark PageRank

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

Test: Scala Dotty

2 x Intel Xeon E5-2680 v4: The test quit with a non-zero exit status.

NAMD

ATPase Simulation - 327,506 Atoms

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: FATAL ERROR: No simulation config file specified on command line.

CP2K Molecular Dynamics

Fayalite-FIST Data

2 x Intel Xeon E5-2680 v4: The test run did not produce a result. E: ERROR: At least one command line argument must be specified

353 Results Shown

Whisper.cpp:
  ggml-medium.en - 2016 State of the Union
  ggml-small.en - 2016 State of the Union
  ggml-base.en - 2016 State of the Union
Numenta Anomaly Benchmark:
  Contextual Anomaly Detector OSE
  Bayesian Changepoint
  Earthgecko Skyline
  Windowed Gaussian
  Relative Entropy
  KNN CAD
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Person Re-Identification Retail FP16 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16 - CPU:
    ms
    FPS
  Noise Suppression Poconet-Like FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16 - CPU:
    ms
    FPS
  Face Detection Retail FP16 - CPU:
    ms
    FPS
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
XNNPACK:
  QS8MobileNetV2
  FP16MobileNetV3Small
  FP16MobileNetV3Large
  FP16MobileNetV1
  FP32MobileNetV3Small
  FP32MobileNetV3Large
  FP32MobileNetV1
TNN:
  CPU - SqueezeNet v1.1
  CPU - SqueezeNet v2
  CPU - MobileNet v2
  CPU - DenseNet
NCNN:
  Vulkan GPU - FastestDet
  Vulkan GPU - vision_transformer
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - yolov4-tiny
  Vulkan GPUv2-yolov3v2-yolov3 - mobilenetv2-yolov3
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPUv2-yolov3v2-yolov3 - mobilenetv2-yolov3
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - mobilenet
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  mobilenetV3
  nasnet
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  Llama2 Chat 7b Quantized - Synchronous Single-Stream:
    ms/batch
    items/sec
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
TensorFlow:
  GPU - 16 - AlexNet
  CPU - 64 - AlexNet
  CPU - 512 - VGG-16
  CPU - 32 - AlexNet
  CPU - 256 - VGG-16
  CPU - 16 - AlexNet
  GPU - 64 - VGG-16
  GPU - 32 - VGG-16
  GPU - 16 - VGG-16
  GPU - 1 - AlexNet
  CPU - 64 - VGG-16
  CPU - 32 - VGG-16
  CPU - 16 - VGG-16
  CPU - 1 - AlexNet
  GPU - 1 - VGG-16
  CPU - 1 - VGG-16
PyTorch:
  CPU - 512 - Efficientnet_v2_l
  CPU - 256 - Efficientnet_v2_l
  CPU - 64 - Efficientnet_v2_l
  CPU - 32 - Efficientnet_v2_l
  CPU - 16 - Efficientnet_v2_l
  CPU - 1 - Efficientnet_v2_l
  CPU - 512 - ResNet-152
  CPU - 256 - ResNet-152
  CPU - 64 - ResNet-152
  CPU - 512 - ResNet-50
  CPU - 32 - ResNet-152
  CPU - 256 - ResNet-50
  CPU - 16 - ResNet-152
  CPU - 64 - ResNet-50
  CPU - 32 - ResNet-50
  CPU - 16 - ResNet-50
  CPU - 1 - ResNet-152
  CPU - 1 - ResNet-50
TensorFlow Lite:
  Mobilenet Quant
  Mobilenet Float
  Inception V4
  SqueezeNet
RNNoise
R Benchmark
DeepSpeech
Numpy Benchmark
oneDNN:
  Recurrent Neural Network Inference - CPU
  Recurrent Neural Network Training - CPU
  Deconvolution Batch shapes_3d - CPU
  Deconvolution Batch shapes_1d - CPU
  Convolution Batch Shapes Auto - CPU
  IP Shapes 3D - CPU
  IP Shapes 1D - CPU
Cpuminer-Opt:
  Triple SHA-256, Onecoin
  Quad SHA-256, Pyrite
  LBC, LBRY Credits
  Myriad-Groestl
  Skeincoin
  Garlicoin
  Blake-2 S
  Ringcoin
  Deepcoin
  scrypt
  x20r
  Magi
Timed LLVM Compilation:
  Unix Makefiles
  Ninja
Xmrig:
  CryptoNight-Femto UPX2 - 1M
  CryptoNight-Heavy - 1M
  GhostRider - 1M
  Wownero - 1M
  Monero - 1M
  KawPow - 1M
InfluxDB:
  1024 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
  4 - 10000 - 2,5000,1 - 10000
Apache Siege:
  1000
  500
  200
  100
  50
  10
Redis 7.0.12 + memtier_benchmark:
  Redis - 100 - 1:10
  Redis - 100 - 10:1
  Redis - 50 - 1:10
  Redis - 50 - 10:1
  Redis - 100 - 5:1
  Redis - 100 - 1:5
  Redis - 100 - 1:1
  Redis - 50 - 5:1
  Redis - 50 - 1:5
  Redis - 50 - 1:1
Redis:
  LPUSH - 1000
  SADD - 1000
  LPUSH - 500
  SET - 1000
  SADD - 500
  LPUSH - 50
  GET - 1000
  SET - 500
  SADD - 50
  LPOP - 50
  GET - 500
  SET - 50
  GET - 50
Memcached:
  1:100
  1:10
  5:1
  1:5
  1:1
libavif avifenc:
  10, Lossless
  6, Lossless
  6
  2
  0
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
  3, Long Mode - Decompression Speed
  3, Long Mode - Compression Speed
  19 - Decompression Speed
  19 - Compression Speed
  12 - Decompression Speed
  12 - Compression Speed
  8 - Decompression Speed
  8 - Compression Speed
  3 - Decompression Speed
  3 - Compression Speed
Timed Node.js Compilation
Timed Wasmer Compilation
Blender:
  Pabellon Barcelona - CPU-Only
  Barbershop - CPU-Only
  Fishy Cat - CPU-Only
  Classroom - CPU-Only
  Junkshop - CPU-Only
  BMW27 - CPU-Only
AOM AV1:
  Speed 11 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 11 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 4K
Aircrack-ng
OpenRadioss:
  Bird Strike on Windshield
  Cell Phone Drop Test
  Bumper Beam
Xcompact3d Incompact3d:
  input.i3d 193 Cells Per Direction
  input.i3d 129 Cells Per Direction
  X3D-benchmarking input.i3d
Pennant:
  leblancbig
  sedovbig
Algebraic Multi-Grid Benchmark
QuantLib:
  XXS
  S
NAMD:
  STMV with 1,066,628 Atoms
  ATPase with 327,506 Atoms
CLOMP
Rodinia:
  OpenMP Streamcluster
  OpenMP CFD Solver
  OpenMP Leukocyte
  OpenMP HotSpot3D
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
LeelaChessZero
PyBench
ctx_clock
m-queens
Hackbench
Cython Benchmark
POV-Ray
C-Ray
Timed PHP Compilation
Timed GCC Compilation
asmFish
7-Zip Compression:
  Decompression Rating
  Compression Rating
Himeno Benchmark
John The Ripper
Renaissance
DaCapo Benchmark:
  Tradebeans
  Jython
Rodinia
NAS Parallel Benchmarks:
  LU.C
  EP.C
CacheBench:
  Read / Modify / Write
  Write
  Read
OpenCV
XNNPACK:
  FP16MobileNetV2
  FP32MobileNetV2
NCNN:
  Vulkan GPU - blazeface
  Vulkan GPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Mobile Neural Network
TensorFlow Lite:
  Inception ResNet V2
  NASNet Mobile
Redis:
  LPOP - 1000
  LPOP - 500
AOM AV1
LeelaChessZero
Stockfish