7763 2204

AMD EPYC 7763 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308059-NE-77632204529
Run Management

Result Identifier   Date Run         Test Duration
a                   August 04 2023   6 Hours, 9 Minutes
b                   August 04 2023   4 Hours, 37 Minutes
c                   August 05 2023   4 Hours, 37 Minutes
Average                              5 Hours, 7 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


7763 2204 - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
Motherboard: AMD DAYTONA_X (RYM1009B BIOS)
Chipset: AMD Starship/Matisse
Memory: 256GB
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VE228
Network: 2 x Mellanox MT27710
OS: Ubuntu 22.04
Kernel: 6.2.0-phx (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.3
Vulkan: 1.3.224
Compiler: GCC 11.3.0 + LLVM 14.0.0
File-System: ext4
Screen Resolution: 1920x1080

7763 2204 Benchmarks - System Logs:
- Transparent Huge Pages: madvise
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa001173
- OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
- Python 3.10.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of runs a, b, and c, spanning 100% to 105%, across NCNN, srsRAN Project, Apache Cassandra, BRL-CAD, VVenC, Apache IoTDB, Blender, Neural Magic DeepSparse, and Timed GCC Compilation.

7763 2204 detailed results: 122 benchmark results across runs a, b, and c; each result is listed individually below.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
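
As a hedged illustration of the workload behind these numbers, the sketch below runs a single CPU inference pass through ncnn's Python binding (pyncnn). It is not the benchmark harness itself, and the model file and blob names (model.param, model.bin, data, output) are placeholders.

    # Minimal sketch, assuming the pyncnn binding is installed; file and
    # blob names are placeholders, not the models this test exercises.
    import ncnn
    import numpy as np

    net = ncnn.Net()
    net.opt.num_threads = 128            # the EPYC 7763 exposes 128 threads
    net.load_param("model.param")        # network structure (placeholder path)
    net.load_model("model.bin")          # network weights (placeholder path)

    mat_in = ncnn.Mat(np.random.rand(224, 224, 3).astype(np.float32))

    ex = net.create_extractor()
    ex.input("data", mat_in)             # input blob name is model-specific
    ret, mat_out = ex.extract("output")  # forward pass; returns (status, ncnn.Mat)
    print(ret, np.array(mat_out).shape)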

NCNN 20230517 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  c: 27.59 (SE +/- 0.07, N = 2, MIN: 26.64 / MAX: 33.4)
  b: 27.54 (SE +/- 0.15, N = 2, MIN: 26.96 / MAX: 33.56)
  a: 35.24 (SE +/- 5.99, N = 2, MIN: 27.86 / MAX: 47.9)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, More Is Better)
  c: 31.4 (MAX: 890.05)
  b: 26.0 (MAX: 873.88)
  a: 27.1 (MAX: 934.45)
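
The three numbers in each Apache IoTDB result name give the workload shape: device count, batch size per write, and sensor count. As a hedged sketch (not the benchmark tool itself), the same insert pattern could be expressed with the apache-iotdb Python client; the host, credentials, and timeseries paths below are illustrative only.

    # Hedged illustration of "Device Count: 500 - Batch Size Per Write: 1 -
    # Sensor Count: 500"; connection details and paths are examples.
    from iotdb.Session import Session
    from iotdb.utils.IoTDBConstants import TSDataType

    DEVICES, BATCH, SENSORS = 500, 1, 500

    session = Session("127.0.0.1", 6667, "root", "root")
    session.open(False)

    measurements = ["s%d" % i for i in range(SENSORS)]
    dtypes = [TSDataType.DOUBLE] * SENSORS

    for dev in range(DEVICES):
        device_id = "root.bench.d%d" % dev
        for ts in range(BATCH):  # one row of SENSORS values per batch entry
            session.insert_record(device_id, ts, measurements, dtypes,
                                  [0.0] * SENSORS)
    session.close()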

NCNN

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  c: 7.60 (SE +/- 0.04, N = 2, MIN: 7.44 / MAX: 11.6)
  b: 7.93 (SE +/- 0.31, N = 2, MIN: 7.51 / MAX: 11.45)
  a: 9.09 (SE +/- 1.23, N = 2, MIN: 7.74 / MAX: 15.92)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, More Is Better)
  c: 16.07 (MAX: 592.48)
  b: 13.60 (MAX: 586.94)
  a: 15.24 (MAX: 583.94)

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, More Is Better)
  c: 1446487.70
  b: 1686943.16
  a: 1636128.73

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, More Is Better)
  c: 11.83 (MAX: 836.9)
  b: 11.63 (MAX: 860.78)
  a: 13.54 (MAX: 856.65)

NCNN

NCNN 20230517 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  c: 3.48 (SE +/- 0.05, N = 2, MIN: 3.32 / MAX: 8.58)
  b: 3.43 (SE +/- 0.01, N = 2, MIN: 3.35 / MAX: 3.83)
  a: 3.97 (SE +/- 0.09, N = 2, MIN: 3.5 / MAX: 7.61)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, More Is Better)
  c: 1367763.49
  b: 1365831.50
  a: 1182440.62

NCNN

NCNN 20230517 - Target: CPU - Model: FastestDet (ms, Fewer Is Better)
  c: 8.88 (SE +/- 0.01, N = 2, MIN: 8.58 / MAX: 13.34)
  b: 9.04 (SE +/- 0.11, N = 2, MIN: 8.65 / MAX: 15.19)
  a: 10.25 (SE +/- 0.94, N = 2, MIN: 8.95 / MAX: 17.14)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, More Is Better)
  c: 870795.92
  b: 978176.76
  a: 898967.08

NCNN

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  c: 6.34 (SE +/- 0.04, N = 2, MIN: 6.15 / MAX: 11.57)
  b: 6.55 (SE +/- 0.18, N = 2, MIN: 6.24 / MAX: 7.61)
  a: 7.00 (SE +/- 0.55, N = 2, MIN: 6.3 / MAX: 10.2)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency, More Is Better)
  c: 16.35 (MAX: 668.86)
  b: 17.28 (MAX: 644.33)
  a: 17.45 (MAX: 645.35)

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 - Test: Downlink Processor Benchmark (Mbps, More Is Better)
  c: 619.3 (SE +/- 0.35, N = 2)
  b: 658.1 (SE +/- 27.75, N = 2)
  a: 657.7 (SE +/- 17.85, N = 2)
  1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, More Is Better)
  c: 37.12 (MAX: 2182.81)
  b: 36.10 (MAX: 1990.15)
  a: 35.05 (MAX: 2157.23)

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better)
  c: 56463717.54
  b: 59505306.55
  a: 56935634.55

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, More Is Better)
  c: 35.01 (MAX: 746.4)
  b: 36.85 (MAX: 721.27)
  a: 36.04 (MAX: 804.01)

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, More Is Better)
  c: 34.08 (MAX: 699.28)
  b: 32.92 (MAX: 728.63)
  a: 34.36 (MAX: 704.53)

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
  c: 49201448.81
  b: 50045888.98
  a: 51341708.85

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, More Is Better)
  c: 83.14 (MAX: 2932.1)
  b: 79.83 (MAX: 1607.86)
  a: 81.81 (MAX: 3018.16)

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
  c: 39945212.99
  b: 38401769.17
  a: 39287432.92

NCNN

NCNN 20230517 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  c: 5.86 (SE +/- 0.05, N = 2, MIN: 5.73 / MAX: 11.67)
  b: 5.90 (SE +/- 0.00, N = 2, MIN: 5.81 / MAX: 12.33)
  a: 6.09 (SE +/- 0.11, N = 2, MIN: 5.89 / MAX: 10.35)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, More Is Better)
  c: 79.16 (MAX: 1006.03)
  b: 82.26 (MAX: 864.29)
  a: 81.20 (MAX: 1009.28)

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency, More Is Better)
  c: 106.73 (MAX: 3485.91)
  b: 110.88 (MAX: 3569.78)
  a: 109.38 (MAX: 3597.09)

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better)
  c: 52464142.83
  b: 50507747.12
  a: 51316464.44

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec, More Is Better)
  c: 667880.96
  b: 648308.27
  a: 644019.72

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency, More Is Better)
  c: 34.79 (MAX: 780.01)
  b: 33.84 (MAX: 773.52)
  a: 35.09 (MAX: 804.64)

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, More Is Better)
  c: 32.71 (MAX: 725.08)
  b: 33.86 (MAX: 659.59)
  a: 33.50 (MAX: 690.29)

NCNN

NCNN 20230517 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  c: 14.59 (SE +/- 0.62, N = 2, MIN: 13.37 / MAX: 277.61)
  b: 14.11 (SE +/- 0.09, N = 2, MIN: 13.32 / MAX: 18.63)
  a: 14.17 (SE +/- 0.01, N = 2, MIN: 13.53 / MAX: 18.45)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec, More Is Better)
  c: 43363203.76
  b: 41987111.39
  a: 42048733.22

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, More Is Better)
  c: 1044153.44
  b: 1069145.79
  a: 1038515.62

NCNN

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  c: 6.17 (SE +/- 0.07, N = 2, MIN: 6.02 / MAX: 6.83)
  b: 6.26 (SE +/- 0.00, N = 2, MIN: 6.11 / MAX: 12.76)
  a: 6.35 (SE +/- 0.01, N = 2, MIN: 6.19 / MAX: 12.45)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, More Is Better)
  c: 1261385.89
  b: 1226219.88
  a: 1232509.19

NCNN

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  c: 9.75 (SE +/- 0.04, N = 2, MIN: 9.58 / MAX: 13.38)
  b: 9.78 (SE +/- 0.01, N = 2, MIN: 9.64 / MAX: 16.01)
  a: 9.98 (SE +/- 0.03, N = 2, MIN: 9.82 / MAX: 10.96)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec, More Is Better)
  c: 46674344.69
  b: 47245476.78
  a: 46437377.67

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better)
  c: 10.82 (SE +/- 0.00, N = 2)
  b: 10.82 (SE +/- 0.02, N = 2)
  a: 10.65 (SE +/- 0.18, N = 2)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 482.13 (SE +/- 7.21, N = 2)
  b: 486.17 (SE +/- 1.03, N = 2)
  a: 489.77 (SE +/- 0.68, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 66.28 (SE +/- 0.99, N = 2)
  b: 65.74 (SE +/- 0.10, N = 2)
  a: 65.28 (SE +/- 0.10, N = 2)
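
DeepSparse reports each model in two scenarios: synchronous single-stream (latency-oriented, ms/batch) and asynchronous multi-stream (throughput-oriented, items/sec). As a minimal, hedged sketch of the DeepSparse Python API; the SparseZoo stub below is a placeholder, not the exact model benchmarked:

    # Hedged sketch of a DeepSparse pipeline; "zoo:placeholder-stub" stands
    # in for a real SparseZoo model path and is not the benchmarked model.
    from deepsparse import Pipeline

    pipeline = Pipeline.create(
        task="sentiment-analysis",
        model_path="zoo:placeholder-stub",
    )
    print(pipeline(["The EPYC 7763 finished this run quickly."]))

DeepSparse also ships a deepsparse.benchmark utility, which is the usual way to produce scenario-style figures like the ones reported here.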

NCNN

NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  c: 20.80 (SE +/- 0.13, N = 2, MIN: 20.01 / MAX: 96.45)
  b: 20.51 (SE +/- 0.08, N = 2, MIN: 19.87 / MAX: 24.86)
  a: 20.66 (SE +/- 0.02, N = 2, MIN: 20.04 / MAX: 25.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.
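
As a hedged Python sketch of the kind of write load cassandra-stress generates (the benchmark itself uses cassandra-stress, not this loop; schema, host, and row count are illustrative):

    # Stand-in for a cassandra-stress style write workload, using the
    # DataStax cassandra-driver; keyspace/table names are examples.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()
    session.execute(
        "CREATE KEYSPACE IF NOT EXISTS bench WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 1}")
    session.execute(
        "CREATE TABLE IF NOT EXISTS bench.kv (k int PRIMARY KEY, v text)")

    insert = session.prepare("INSERT INTO bench.kv (k, v) VALUES (?, ?)")
    for i in range(100000):  # cassandra-stress drives this from many threads
        session.execute(insert, (i, "payload"))
    cluster.shutdown()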

Apache Cassandra 4.1.3 - Test: Writes (Op/s, More Is Better)
  c: 234887 (SE +/- 669.00, N = 2)
  b: 238161 (SE +/- 817.50, N = 2)
  a: 236650 (SE +/- 633.50, N = 2)

srsRAN Project

srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Thread (Mbps, More Is Better)
  c: 210.8 (SE +/- 0.20, N = 2)
  b: 208.2 (SE +/- 1.90, N = 2)
  a: 211.1 (SE +/- 0.10, N = 2)
  1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

NCNN

NCNN 20230517 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  c: 15.54 (SE +/- 0.14, N = 2, MIN: 15.15 / MAX: 27.2)
  b: 15.34 (SE +/- 0.10, N = 2, MIN: 15.07 / MAX: 21.69)
  a: 15.49 (SE +/- 0.08, N = 2, MIN: 15.24 / MAX: 21.82)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 1.3634 (SE +/- 0.0010, N = 2)
  b: 1.3623 (SE +/- 0.0055, N = 2)
  a: 1.3784 (SE +/- 0.0205, N = 2)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 731.51 (SE +/- 0.48, N = 2)
  b: 732.11 (SE +/- 2.91, N = 2)
  a: 723.71 (SE +/- 10.71, N = 2)

NCNN

NCNN 20230517 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  c: 23.91 (SE +/- 0.08, N = 2, MIN: 23.48 / MAX: 30.63)
  b: 23.64 (SE +/- 0.06, N = 2, MIN: 23.33 / MAX: 28.05)
  a: 23.84 (SE +/- 0.05, N = 2, MIN: 23.45 / MAX: 28.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 172.07 (SE +/- 0.80, N = 2)
  b: 172.75 (SE +/- 0.63, N = 2)
  a: 173.95 (SE +/- 0.98, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 5.8092 (SE +/- 0.0271, N = 2)
  b: 5.7863 (SE +/- 0.0212, N = 2)
  a: 5.7470 (SE +/- 0.0325, N = 2)

NCNN

NCNN 20230517 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  c: 8.51 (SE +/- 0.03, N = 2, MIN: 8.3 / MAX: 13.61)
  b: 8.42 (SE +/- 0.04, N = 2, MIN: 8.27 / MAX: 14.6)
  a: 8.50 (SE +/- 0.03, N = 2, MIN: 8.33 / MAX: 14.69)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  c: 14.47 (SE +/- 0.06, N = 2, MIN: 14.26 / MAX: 20.57)
  b: 14.53 (SE +/- 0.02, N = 2, MIN: 14.31 / MAX: 24.11)
  a: 14.62 (SE +/- 0.03, N = 2, MIN: 14.46 / MAX: 25.51)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  c: 14.03 (SE +/- 0.09, N = 2, MIN: 13.68 / MAX: 18.14)
  b: 13.97 (SE +/- 0.03, N = 2, MIN: 13.64 / MAX: 19.68)
  a: 14.11 (SE +/- 0.04, N = 2, MIN: 13.76 / MAX: 19.75)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Blender

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  c: 27.24 (SE +/- 0.04, N = 2)
  b: 27.50 (SE +/- 0.15, N = 2)
  a: 27.27 (SE +/- 0.06, N = 2)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 96.88 (SE +/- 0.08, N = 2)
  b: 97.61 (SE +/- 0.17, N = 2)
  a: 97.36 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 10.31 (SE +/- 0.01, N = 2)
  b: 10.24 (SE +/- 0.02, N = 2)
  a: 10.26 (SE +/- 0.00, N = 2)

NCNN

NCNN 20230517 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better)
  c: 48.43 (SE +/- 0.30, N = 2, MIN: 47.33 / MAX: 85.35)
  b: 48.49 (SE +/- 0.04, N = 2, MIN: 47.44 / MAX: 58.53)
  a: 48.79 (SE +/- 0.07, N = 2, MIN: 47.65 / MAX: 78.36)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 46.90 (SE +/- 0.02, N = 2)
  b: 46.57 (SE +/- 0.03, N = 2)
  a: 46.72 (SE +/- 0.10, N = 2)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 - VGR Performance Metric (More Is Better)
  c: 730434 (SE +/- 963.50, N = 2)
  b: 729876 (SE +/- 357.50, N = 2)
  a: 734386 (SE +/- 1805.50, N = 2)
  1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 10.57 (SE +/- 0.02, N = 2)
  b: 10.63 (SE +/- 0.07, N = 2)
  a: 10.58 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 94.55 (SE +/- 0.18, N = 2)
  b: 94.04 (SE +/- 0.64, N = 2)
  a: 94.51 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 160.59 (SE +/- 0.80, N = 2)
  b: 159.75 (SE +/- 0.22, N = 2)
  a: 159.85 (SE +/- 0.43, N = 2)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 6.2238 (SE +/- 0.0308, N = 2)
  b: 6.2566 (SE +/- 0.0087, N = 2)
  a: 6.2525 (SE +/- 0.0166, N = 2)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 40.76 (SE +/- 0.01, N = 2)
  b: 40.70 (SE +/- 0.01, N = 2)
  a: 40.90 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 24.53 (SE +/- 0.00, N = 2)
  b: 24.57 (SE +/- 0.01, N = 2)
  a: 24.44 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 160.70 (SE +/- 0.62, N = 2)
  b: 159.92 (SE +/- 0.32, N = 2)
  a: 159.97 (SE +/- 0.24, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 6.2195 (SE +/- 0.0244, N = 2)
  b: 6.2495 (SE +/- 0.0121, N = 2)
  a: 6.2481 (SE +/- 0.0097, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 18.53 (SE +/- 0.03, N = 2)
  b: 18.61 (SE +/- 0.00, N = 2)
  a: 18.59 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 53.95 (SE +/- 0.10, N = 2)
  b: 53.70 (SE +/- 0.01, N = 2)
  a: 53.78 (SE +/- 0.08, N = 2)

srsRAN Project

srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps, More Is Better)
  c: 9727.1 (SE +/- 56.45, N = 2)
  b: 9718.6 (SE +/- 44.35, N = 2)
  a: 9682.1 (SE +/- 13.30, N = 2)
  1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Blender

Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  c: 84.17 (SE +/- 0.04, N = 2)
  b: 84.35 (SE +/- 0.14, N = 2)
  a: 84.55 (SE +/- 0.40, N = 2)

Blender 3.6 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  c: 68.70 (SE +/- 0.13, N = 2)
  b: 68.50 (SE +/- 0.03, N = 2)
  a: 68.80 (SE +/- 0.14, N = 2)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 49.93 (SE +/- 0.05, N = 2)
  b: 49.92 (SE +/- 0.11, N = 2)
  a: 50.13 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 20.02 (SE +/- 0.02, N = 2)
  b: 20.03 (SE +/- 0.04, N = 2)
  a: 19.95 (SE +/- 0.01, N = 2)

VVenC

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better)
  c: 29.47 (SE +/- 0.07, N = 2)
  b: 29.39 (SE +/- 0.11, N = 2)
  a: 29.35 (SE +/- 0.07, N = 2)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 39.51 (SE +/- 0.01, N = 2)
  b: 39.56 (SE +/- 0.11, N = 2)
  a: 39.42 (SE +/- 0.06, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 25.30 (SE +/- 0.01, N = 2)
  b: 25.27 (SE +/- 0.07, N = 2)
  a: 25.36 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 143.57 (SE +/- 0.03, N = 2)
  b: 143.21 (SE +/- 0.18, N = 2)
  a: 143.07 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 120.58 (SE +/- 0.08, N = 2)
  b: 120.18 (SE +/- 0.02, N = 2)
  a: 120.36 (SE +/- 0.19, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 8.2903 (SE +/- 0.0053, N = 2)
  b: 8.3182 (SE +/- 0.0014, N = 2)
  a: 8.3056 (SE +/- 0.0127, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 11.58 (SE +/- 0.06, N = 2)
  b: 11.57 (SE +/- 0.01, N = 2)
  a: 11.54 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 86.32 (SE +/- 0.42, N = 2)
  b: 86.39 (SE +/- 0.08, N = 2)
  a: 86.61 (SE +/- 0.59, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 222.82 (SE +/- 0.05, N = 2)
  b: 223.37 (SE +/- 0.26, N = 2)
  a: 223.47 (SE +/- 0.03, N = 2)

VVenC

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
  c: 5.976 (SE +/- 0.002, N = 2)
  b: 5.993 (SE +/- 0.006, N = 2)
  a: 5.991 (SE +/- 0.001, N = 2)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 8.3468 (SE +/- 0.0366, N = 2)
  b: 8.3441 (SE +/- 0.0212, N = 2)
  a: 8.3665 (SE +/- 0.0013, N = 2)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 3823.08 (SE +/- 17.23, N = 2)
  b: 3824.30 (SE +/- 10.54, N = 2)
  a: 3814.52 (SE +/- 0.54, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 53.62 (SE +/- 0.00, N = 2)
  b: 53.55 (SE +/- 0.05, N = 2)
  a: 53.48 (SE +/- 0.00, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 596.53 (SE +/- 0.01, N = 2)
  b: 596.81 (SE +/- 0.12, N = 2)
  a: 597.97 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 37.58 (SE +/- 0.02, N = 2)
  b: 37.53 (SE +/- 0.01, N = 2)
  a: 37.61 (SE +/- 0.09, N = 2)

VVenC

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better)
  c: 16.06 (SE +/- 0.02, N = 2)
  b: 16.09 (SE +/- 0.02, N = 2)
  a: 16.08 (SE +/- 0.02, N = 2)
  1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 225.80 (SE +/- 0.22, N = 2)
  b: 225.46 (SE +/- 0.10, N = 2)
  a: 225.31 (SE +/- 0.21, N = 2)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 679.81 (SE +/- 0.34, N = 2)
  b: 679.82 (SE +/- 0.17, N = 2)
  a: 681.28 (SE +/- 0.37, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 49.86 (SE +/- 0.06, N = 2)
  b: 49.91 (SE +/- 0.00, N = 2)
  a: 49.81 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 20.05 (SE +/- 0.02, N = 2)
  b: 20.03 (SE +/- 0.00, N = 2)
  a: 20.07 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 37.58 (SE +/- 0.02, N = 2)
  b: 37.54 (SE +/- 0.01, N = 2)
  a: 37.61 (SE +/- 0.04, N = 2)

NCNN

NCNN 20230517 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  c: 5.22 (SE +/- 0.02, N = 2, MIN: 5.12 / MAX: 7.84)
  b: 5.22 (SE +/- 0.02, N = 2, MIN: 5.11 / MAX: 5.77)
  a: 5.23 (SE +/- 0.01, N = 2, MIN: 5.12 / MAX: 11.62)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 1103.36 (SE +/- 0.21, N = 2)
  b: 1104.04 (SE +/- 1.15, N = 2)
  a: 1105.38 (SE +/- 1.07, N = 2)

Blender

Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  c: 33.72 (SE +/- 0.01, N = 2)
  b: 33.76 (SE +/- 0.25, N = 2)
  a: 33.70 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 28.96 (SE +/- 0.01, N = 2)
  b: 28.95 (SE +/- 0.03, N = 2)
  a: 28.92 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 141.48 (SE +/- 0.05, N = 2)
  b: 141.62 (SE +/- 0.09, N = 2)
  a: 141.70 (SE +/- 0.13, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 8.3300 (SE +/- 0.0003, N = 2)
  b: 8.3368 (SE +/- 0.0012, N = 2)
  a: 8.3423 (SE +/- 0.0056, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 119.98 (SE +/- 0.01, N = 2)
  b: 119.88 (SE +/- 0.02, N = 2)
  a: 119.80 (SE +/- 0.08, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 840.44 (SE +/- 0.44, N = 2)
  b: 840.26 (SE +/- 0.01, N = 2)
  a: 841.48 (SE +/- 0.49, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 166.06 (SE +/- 0.08, N = 2)
  b: 166.22 (SE +/- 0.06, N = 2)
  a: 166.00 (SE +/- 0.05, N = 2)

Blender

Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  c: 253.43 (SE +/- 0.52, N = 2)
  b: 253.77 (SE +/- 0.23, N = 2)
  a: 253.49 (SE +/- 0.11, N = 2)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 467.64 (SE +/- 0.26, N = 2)
  b: 468.11 (SE +/- 1.23, N = 2)
  a: 467.98 (SE +/- 0.36, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 192.21 (SE +/- 0.21, N = 2)
  b: 192.16 (SE +/- 0.00, N = 2)
  a: 192.35 (SE +/- 0.06, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 68.34 (SE +/- 0.01, N = 2)
  b: 68.27 (SE +/- 0.13, N = 2)
  a: 68.32 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 840.12 (SE +/- 0.59, N = 2)
  b: 840.95 (SE +/- 0.16, N = 2)
  a: 840.54 (SE +/- 1.02, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 227.57 (SE +/- 0.22, N = 2)
  b: 227.64 (SE +/- 0.19, N = 2)
  a: 227.42 (SE +/- 0.11, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 140.30 (SE +/- 0.23, N = 2)
  b: 140.26 (SE +/- 0.12, N = 2)
  a: 140.40 (SE +/- 0.07, N = 2)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 468.33 (SE +/- 0.09, N = 2)
  b: 467.97 (SE +/- 0.62, N = 2)
  a: 468.14 (SE +/- 0.19, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 326.41 (SE +/- 0.49, N = 2)
  b: 326.54 (SE +/- 0.48, N = 2)
  a: 326.31 (SE +/- 0.12, N = 2)

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.
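
What is measured is the wall-clock time of a full parallel build. A hedged sketch of the measurement, assuming a GCC 13.2 source tree that has already been configured into gcc-build/ (paths are examples):

    # Hedged sketch: time a parallel GCC build; assumes
    # ../gcc-13.2.0/configure has already been run inside gcc-build/.
    import os
    import subprocess
    import time

    start = time.time()
    subprocess.run(["make", "-j%d" % os.cpu_count()], cwd="gcc-build", check=True)
    print("Time To Compile: %.2f seconds" % (time.time() - start))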

Timed GCC Compilation 13.2 - Time To Compile (Seconds, Fewer Is Better)
  c: 1020.22 (SE +/- 0.66, N = 2)
  b: 1020.85 (SE +/- 0.03, N = 2)
  a: 1020.13 (SE +/- 1.81, N = 2)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  c: 575.12 (SE +/- 0.31, N = 2)
  b: 575.29 (SE +/- 0.68, N = 2)
  a: 574.96 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 68.25 (SE +/- 0.01, N = 2)
  b: 68.28 (SE +/- 0.04, N = 2)
  a: 68.29 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 55.58 (SE +/- 0.04, N = 2)
  b: 55.56 (SE +/- 0.06, N = 2)
  a: 55.58 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  c: 28.64 (SE +/- 0.02, N = 2)
  b: 28.63 (SE +/- 0.04, N = 2)
  a: 28.63 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  c: 34.90 (SE +/- 0.03, N = 2)
  b: 34.91 (SE +/- 0.05, N = 2)
  a: 34.91 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  c: 97.87 (SE +/- 0.16, N = 2)
  b: 97.84 (SE +/- 0.15, N = 2)
  a: 97.87 (SE +/- 0.00, N = 2)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
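
The result names encode Bulk Size, Inserts, and Rounds. One plausible reading of those parameters, sketched against CouchDB's documented _bulk_docs endpoint with the requests library (server URL and credentials are examples):

    # Hedged sketch of bulk insertion via CouchDB's _bulk_docs API;
    # URL, credentials, and document payloads are illustrative.
    import requests

    BASE = "http://admin:password@127.0.0.1:5984"
    requests.put("%s/bench" % BASE)             # create the database

    BULK_SIZE, INSERTS, ROUNDS = 500, 3000, 30  # e.g. the first result below
    for _ in range(ROUNDS):
        for start in range(0, INSERTS, BULK_SIZE):
            docs = [{"seq": start + i, "payload": "x" * 64}
                    for i in range(BULK_SIZE)]
            r = requests.post("%s/bench/_bulk_docs" % BASE, json={"docs": docs})
            r.raise_for_status()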

Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better)
  a: 2390.93
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB 3.3.2 - Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better)
  a: 339.97 (SE +/- 8.78, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB 3.3.2 - Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better)
  a: 572.13 (SE +/- 0.52, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB 3.3.2 - Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better)
  a: 169.51 (SE +/- 0.64, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB 3.3.2 - Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better)
  a: 346.09 (SE +/- 0.25, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Apache CouchDB 3.3.2 - Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better)
  a: 101.58 (SE +/- 0.50, N = 2)
  1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

122 Results Shown
