kdlkf

AMD EPYC 8534P 64-Core testing with an AMD Cinnabar (RCB1009C BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2403160-NE-KDLKF039836&grs.

kdlkf - System Details (configurations a, b, and c were run on the same system)

Processor: AMD EPYC 8534P 64-Core @ 2.30GHz (64 Cores / 128 Threads)
Motherboard: AMD Cinnabar (RCB1009C BIOS)
Chipset: AMD Device 14a4
Memory: 6 x 32GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
Graphics: ASPEED
Network: 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10
Kernel: 6.5.0-15-generic (x86_64)
Desktop: GNOME Shell
Display Server: X Server 1.21.1.7
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1200

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-FTCNCZ/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-FTCNCZ/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xaa00212
Python Details: Python 3.11.5
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
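The Processor Details above (scaling governor and CPU microcode) can be cross-checked on a comparable Linux host. The following is a minimal Python sketch, not part of the exported result; it assumes the standard cpufreq sysfs and /proc/cpuinfo locations found on a typical Linux install such as the Ubuntu 23.10 system used here.

# Minimal sketch (not from the original export): report the cpufreq scaling
# governor and CPU microcode revision, for comparison with the Processor
# Details listed above (acpi-cpufreq performance governor, microcode 0xaa00212).
from pathlib import Path

def scaling_governor(cpu: int = 0) -> str:
    # cpufreq sysfs entry, e.g. "performance"
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
    return path.read_text().strip()

def microcode_revision() -> str:
    # First "microcode" field in /proc/cpuinfo, e.g. "0xaa00212"
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("microcode"):
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    print("Scaling governor:", scaling_governor())
    print("CPU microcode:", microcode_revision())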

kdlkf - Results Overview: side-by-side results for configurations a, b, and c across all tests (srsRAN Project, Stockfish, JPEG-XL libjxl, Parallel BZIP2 Compression, SVT-AV1, Neural Magic DeepSparse, OpenVINO, oneDNN, Google Draco, Primesieve, Chaos Group V-RAY, Timed Linux Kernel Compilation, WavPack Audio Encoding). The individual per-test results follow below.
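Because the same tests were run on configurations a, b, and c, results are most easily read as relative differences against one configuration. Below is a minimal Python sketch, not part of the exported result, using the srsRAN PDSCH Processor Benchmark, Throughput Thread values (Mbps, more is better) from the first chart that follows.

# Minimal sketch: percent difference of each configuration versus configuration a,
# using the srsRAN "PDSCH Processor Benchmark, Throughput Thread" results (Mbps).
results_mbps = {"a": 545.7, "b": 628.9, "c": 630.9}
baseline = results_mbps["a"]
for config, value in results_mbps.items():
    delta = (value / baseline - 1.0) * 100.0
    print(f"{config}: {value:.1f} Mbps ({delta:+.1f}% vs. a)")

For that one test this prints roughly +15.2% for b and +15.6% for c relative to a.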

srsRAN Project

Test: PDSCH Processor Benchmark, Throughput Thread

OpenBenchmarking.org - Mbps, More Is Better - srsRAN Project 23.10.1-20240219 - Test: PDSCH Processor Benchmark, Throughput Thread
a: 545.7 | b: 628.9 | c: 630.9
1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -mavx512f -mavx512cd -mavx512bw -mavx512dq -O3 -fno-trapping-math -fno-math-errno -ldl

Stockfish

Chess Benchmark

OpenBenchmarking.org - Nodes Per Second, More Is Better - Stockfish 16.1 - Chess Benchmark
a: 98973914 | b: 110763828 | c: 97292518
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

JPEG-XL libjxl

Input: PNG - Quality: 90

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL libjxl 0.10.1 - Input: PNG - Quality: 90
a: 36.94 | b: 41.75 | c: 37.53
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: PNG - Quality: 80

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL libjxl 0.10.1 - Input: PNG - Quality: 80
a: 42.81 | b: 40.41 | c: 45.02
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: JPEG - Quality: 80

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL libjxl 0.10.1 - Input: JPEG - Quality: 80
a: 40.43 | b: 43.88 | c: 40.66
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

Parallel BZIP2 Compression

FreeBSD-13.0-RELEASE-amd64-memstick.img Compression

OpenBenchmarking.org - Seconds, Fewer Is Better - Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression
a: 2.016558 | b: 2.170377 | c: 2.057670
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

JPEG-XL libjxl

Input: JPEG - Quality: 90

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL libjxl 0.10.1 - Input: JPEG - Quality: 90
a: 39.59 | b: 38.18 | c: 38.07
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K
a: 71.24 | b: 68.58 | c: 69.50
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 823.78 | b: 820.60 | c: 796.35

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 1.2101 | b: 1.2147 | c: 1.2517

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p
a: 577.70 | b: 582.51 | c: 565.04
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

srsRAN Project

Test: PDSCH Processor Benchmark, Throughput Total

OpenBenchmarking.org - Mbps, More Is Better - srsRAN Project 23.10.1-20240219 - Test: PDSCH Processor Benchmark, Throughput Total
a: 21012.2 | b: 21055.5 | c: 20479.7
1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -mavx512f -mavx512cd -mavx512bw -mavx512dq -O3 -fno-trapping-math -fno-math-errno -ldl

JPEG-XL Decoding libjxl

CPU Threads: All

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL Decoding libjxl 0.10.1 - CPU Threads: All
a: 551.29 | b: 546.96 | c: 536.44

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p
a: 507.36 | b: 499.83 | c: 509.75
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p
a: 142.85 | b: 142.25 | c: 145.06
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16 - Device: CPU
a: 556.51 | b: 546.67 | c: 545.93
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16 - Device: CPU
a: 57.43 (MIN: 33.35 / MAX: 86.63) | b: 58.46 (MIN: 17.61 / MAX: 117.33) | c: 58.54 (MIN: 27.13 / MAX: 87.96)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p
a: 18.75 | b: 18.76 | c: 19.07
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Google Draco

Model: Church Facade

OpenBenchmarking.org - ms, Fewer Is Better - Google Draco 1.5.6 - Model: Church Facade
a: 8195 | b: 8069 | c: 8162
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 202.10 | b: 203.93 | c: 201.04

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 4.9422 | b: 4.8985 | c: 4.9685

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 13 - Input: Bosphorus 4K
a: 167.56 | b: 166.21 | c: 165.24
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Person Vehicle Bike Detection FP16 - Device: CPU
a: 10.73 (MIN: 6.71 / MAX: 31.79) | b: 10.84 (MIN: 6.69 / MAX: 60.8) | c: 10.88 (MIN: 6.49 / MAX: 19.38)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Person Vehicle Bike Detection FP16 - Device: CPU
a: 2974.20 | b: 2945.11 | c: 2934.13
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN

Harness: IP Shapes 1D - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: IP Shapes 1D - Engine: CPU
a: 0.796392 (MIN: 0.71) | b: 0.791773 (MIN: 0.71) | c: 0.802263 (MIN: 0.73)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Chaos Group V-RAY

Mode: CPU

OpenBenchmarking.org - vsamples, More Is Better - Chaos Group V-RAY 6.0 - Mode: CPU
a: 92008 | b: 92099 | c: 91007

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K
a: 7.000 | b: 7.005 | c: 6.925
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
a: 2.8545 | b: 2.8315 | c: 2.8225

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Handwritten English Recognition FP16 - Device: CPU
a: 1541.43 | b: 1540.00 | c: 1525.91
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Handwritten English Recognition FP16 - Device: CPU
a: 41.49 (MIN: 30.95 / MAX: 75.67) | b: 41.53 (MIN: 29.25 / MAX: 66.79) | c: 41.91 (MIN: 28.76 / MAX: 70.31)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream
a: 9727.68 | b: 9786.06 | c: 9824.21

oneDNN

Harness: Recurrent Neural Network Training - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: Recurrent Neural Network Training - Engine: CPU
a: 746.67 (MIN: 725.12) | b: 752.48 (MIN: 727.5) | c: 745.41 (MIN: 723.29)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU
a: 10.59 (MIN: 5.47 / MAX: 32.18) | b: 10.69 (MIN: 5.5 / MAX: 63.05) | c: 10.60 (MIN: 5.23 / MAX: 33.01)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

JPEG-XL libjxl

Input: JPEG - Quality: 100

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL libjxl 0.10.1 - Input: JPEG - Quality: 100
a: 30.19 | b: 30.03 | c: 29.92
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU
a: 3015.55 | b: 2988.49 | c: 3012.81
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: Convolution Batch Shapes Auto - Engine: CPU
a: 1.18235 (MIN: 1.08) | b: 1.18126 (MIN: 1.07) | c: 1.17292 (MIN: 1.08)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
a: 136.97 | b: 137.78 | c: 138.06

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
a: 7.2927 | b: 7.2496 | c: 7.2356

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 64.49 | b: 64.42 | c: 64.01

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 15.49 | b: 15.51 | c: 15.61

Google Draco

Model: Lion

OpenBenchmarking.org - ms, Fewer Is Better - Google Draco 1.5.6 - Model: Lion
a: 6301 | b: 6258 | c: 6255
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 3769.72 | b: 3790.80 | c: 3797.24

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 8.4671 | b: 8.4216 | c: 8.4068

Primesieve

Length: 1e12

OpenBenchmarking.org - Seconds, Fewer Is Better - Primesieve 12.1 - Length: 1e12
a: 3.556 | b: 3.554 | c: 3.531
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 213.94 | b: 212.47 | c: 213.20

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
a: 86126.49 | b: 86415.41 | c: 85841.12
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: Recurrent Neural Network Inference - Engine: CPU
a: 455.02 (MIN: 443.47) | b: 457.83 (MIN: 446.01) | c: 457.71 (MIN: 446.22)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: Deconvolution Batch shapes_1d - Engine: CPU
a: 8.73308 (MIN: 8.15) | b: 8.78255 (MIN: 8.15) | c: 8.75546 (MIN: 8.13)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 149.25 | b: 150.07 | c: 149.61

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
a: 30.38 (MIN: 20.19 / MAX: 87.53) | b: 30.30 (MIN: 15.64 / MAX: 53.03) | c: 30.46 (MIN: 16.49 / MAX: 89.4)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
a: 1051.90 | b: 1054.53 | c: 1049.11
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: Deconvolution Batch shapes_3d - Engine: CPU
a: 1.45255 (MIN: 1.34) | b: 1.45945 (MIN: 1.35) | c: 1.45433 (MIN: 1.34)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
a: 35.76 | b: 35.59 | c: 35.68

JPEG-XL libjxl

Input: PNG - Quality: 100

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL libjxl 0.10.1 - Input: PNG - Quality: 100
a: 29.42 | b: 29.50 | c: 29.55
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 685.23 | b: 686.68 | c: 683.77

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 46.62 | b: 46.53 | c: 46.72

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
a: 68063.14 | b: 68303.49 | c: 68152.88
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
a: 35.62 | b: 35.74 | c: 35.62

Primesieve

Length: 1e13

OpenBenchmarking.org - Seconds, Fewer Is Better - Primesieve 12.1 - Length: 1e13
a: 42.90 | b: 42.76 | c: 42.84
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
a: 101.53 | b: 101.31 | c: 101.63

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
a: 509.31 | b: 508.60 | c: 507.72

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 154.50 | b: 154.04 | c: 154.41

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
a: 6.4656 | b: 6.4846 | c: 6.4692

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU
a: 1616.79 | b: 1621.11 | c: 1621.25
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Noise Suppression Poconet-Like FP16 - Device: CPU
a: 3817.87 | b: 3825.64 | c: 3828.30
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Person Re-Identification Retail FP16 - Device: CPU
a: 3927.60 | b: 3916.92 | c: 3921.98
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
a: 314.47 | b: 315.29 | c: 314.48

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 22.15 | b: 22.20 | c: 22.18

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.org - Seconds, Fewer Is Better - Timed Linux Kernel Compilation 6.8 - Build: defconfig
a: 44.30 | b: 44.41 | c: 44.39

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU
a: 39.55 (MIN: 26.86 / MAX: 115.64) | b: 39.45 (MIN: 30.71 / MAX: 60.16) | c: 39.45 (MIN: 29.74 / MAX: 104.11)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Person Re-Identification Retail FP16 - Device: CPU
a: 8.13 (MIN: 5.06 / MAX: 24.84) | b: 8.15 (MIN: 5.1 / MAX: 25.03) | c: 8.14 (MIN: 5.1 / MAX: 61.68)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
a: 1442.79 | b: 1439.29 | c: 1440.70

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Noise Suppression Poconet-Like FP16 - Device: CPU
a: 16.53 (MIN: 11.99 / MAX: 30.98) | b: 16.51 (MIN: 13.03 / MAX: 73.51) | c: 16.49 (MIN: 12.9 / MAX: 50.1)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN

Harness: IP Shapes 3D - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 3.4 - Harness: IP Shapes 3D - Engine: CPU
a: 0.990182 (MIN: 0.96) | b: 0.992488 (MIN: 0.96) | c: 0.991900 (MIN: 0.96)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.org - Seconds, Fewer Is Better - Timed Linux Kernel Compilation 6.8 - Build: allmodconfig
a: 391.24 | b: 391.93 | c: 391.03

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Face Detection Retail FP16 - Device: CPU
a: 4.60 (MIN: 2.66 / MAX: 59.62) | b: 4.61 (MIN: 2.69 / MAX: 23.07) | c: 4.60 (MIN: 2.64 / MAX: 56.83)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU
a: 56.39 | b: 56.45 | c: 56.51
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU
a: 9773.24 | b: 9793.26 | c: 9779.93
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
a: 69.50 | b: 69.64 | c: 69.62

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream
a: 14.38 | b: 14.35 | c: 14.36

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
a: 881.71 | b: 883.06 | c: 883.40

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU
a: 147.21 (MIN: 67.86 / MAX: 215.32) | b: 147.48 (MIN: 61.46 / MAX: 214.49) | c: 147.42 (MIN: 67.79 / MAX: 216.59)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

JPEG-XL Decoding libjxl

CPU Threads: 1

OpenBenchmarking.org - MP/s, More Is Better - JPEG-XL Decoding libjxl 0.10.1 - CPU Threads: 1
a: 49.66 | b: 49.68 | c: 49.75

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
a: 5.3423 | b: 5.3412 | c: 5.3508

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
a: 11.26 (MIN: 6.23 / MAX: 66.17) | b: 11.24 (MIN: 6.03 / MAX: 26.13) | c: 11.25 (MIN: 5.91 / MAX: 28.57)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
a: 186.97 | b: 187.00 | c: 186.67

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU
a: 217.12 | b: 216.75 | c: 216.84
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Face Detection FP16 - Device: CPU
a: 29.64 | b: 29.59 | c: 29.60
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 2.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K
a: 165.28 | b: 165.01 | c: 165.22
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU
a: 96.50 (MIN: 59.72 / MAX: 146.05) | b: 96.35 (MIN: 51.15 / MAX: 138.53) | c: 96.38 (MIN: 51.17 / MAX: 136.87)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
a: 62.35 | b: 62.30 | c: 62.39

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU
a: 6.53 (MIN: 3.83 / MAX: 25.46) | b: 6.52 (MIN: 3.91 / MAX: 24.76) | c: 6.53 (MIN: 4.04 / MAX: 24.49)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
a: 5.3331 | b: 5.3354 | c: 5.3276

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU
a: 331.17 | b: 331.65 | c: 331.63
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Vehicle Detection FP16 - Device: CPU
a: 1332.49 | b: 1330.83 | c: 1332.74
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
a: 187.28 | b: 187.20 | c: 187.46

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Face Detection Retail FP16 - Device: CPU
a: 6930.15 | b: 6920.51 | c: 6927.90
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

WavPack Audio Encoding

WAV To WavPack

OpenBenchmarking.org - Seconds, Fewer Is Better - WavPack Audio Encoding 5.7 - WAV To WavPack
a: 6.067 | b: 6.066 | c: 6.059

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Vehicle Detection FP16 - Device: CPU
a: 23.96 (MIN: 14.33 / MAX: 58.92) | b: 23.99 (MIN: 11.1 / MAX: 55.51) | c: 23.96 (MIN: 11.23 / MAX: 87.56)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
a: 5678.12 | b: 5685.06 | c: 5681.48
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
a: 44.11 | b: 44.07 | c: 44.06

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
a: 22.65 | b: 22.67 | c: 22.67

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
a: 477.05 | b: 476.53 | c: 476.82

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Weld Porosity Detection FP16 - Device: CPU
a: 21.55 (MIN: 11.26 / MAX: 37.6) | b: 21.54 (MIN: 11.09 / MAX: 73.05) | c: 21.53 (MIN: 11.27 / MAX: 38.52)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
a: 882.34 | b: 883.11 | c: 882.87

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Face Detection FP16 - Device: CPU
a: 1077.23 (MIN: 1028.06 / MAX: 1100.35) | b: 1076.61 (MIN: 1012.89 / MAX: 1114.3) | c: 1076.35 (MIN: 1013.12 / MAX: 1103.21)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Weld Porosity Detection FP16 - Device: CPU
a: 2966.79 | b: 2968.32 | c: 2969.13
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
a: 66.99 | b: 66.99 | c: 67.04

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
a: 66.97 | b: 67.01 | c: 66.99

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org - FPS, More Is Better - OpenVINO 2024.0 - Model: Person Detection FP32 - Device: CPU
a: 216.99 | b: 217.07 | c: 216.93
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Person Detection FP32 - Device: CPU
a: 147.33 (MIN: 64.91 / MAX: 218.97) | b: 147.26 (MIN: 62.64 / MAX: 223.02) | c: 147.35 (MIN: 69.53 / MAX: 218.4)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
a: 29.75 | b: 29.74 | c: 29.76

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
a: 33.60 | b: 33.61 | c: 33.60

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
a: 29.72 | b: 29.71 | c: 29.72

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
a: 33.64 | b: 33.64 | c: 33.63

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU
a: 564.47 (MIN: 481.93 / MAX: 614.86) | b: 564.47 (MIN: 525.34 / MAX: 571.96) | c: 564.32 (MIN: 525.83 / MAX: 608.58)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
a: 476.83 | b: 476.81 | c: 476.88

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
a: 0.58 (MIN: 0.27 / MAX: 50.28) | b: 0.58 (MIN: 0.26 / MAX: 51.28) | c: 0.58 (MIN: 0.27 / MAX: 21.42)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - ms, Fewer Is Better - OpenVINO 2024.0 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
a: 0.77 (MIN: 0.36 / MAX: 50.76) | b: 0.77 (MIN: 0.36 / MAX: 11.94) | c: 0.77 (MIN: 0.36 / MAX: 50.89)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl


Phoronix Test Suite v10.8.5