8490h 1s

Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2307296-NE-8490H1S1663

Result Identifier - Date Run - Test Duration

a - July 28 2023 - 1 Hour, 52 Minutes
b - July 28 2023 - 2 Hours, 53 Minutes
c - July 28 2023 - 1 Hour, 26 Minutes
d - July 28 2023 - 1 Hour, 25 Minutes
e - July 29 2023 - 1 Hour, 25 Minutes
Average - 1 Hour, 48 Minutes



System Details

Processor: Intel Xeon Platinum 8490H @ 3.50GHz (60 Cores / 120 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 512GB
Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04
Kernel: 5.15.0-47-generic (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server 1.21.1.3
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1024x768

System Logs

- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0x2b0000c0
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
- Python 3.10.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart: relative performance of runs a-e (scale 100% to 116%) across Apache Cassandra, Redis 7.0.12 + memtier_benchmark, Dragonflydb, BRL-CAD, Neural Magic DeepSparse, and Blender.]
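The overview chart normalizes each test so the slowest run reads 100% and the others as relative percentages; the Phoronix Test Suite can also summarize runs with an overall geometric mean. A minimal sketch of that arithmetic, using the Redis (Clients: 50, Set To Get Ratio: 1:1) throughputs from this file — the function names are illustrative, not PTS code:

```python
from math import prod

def normalize(values):
    """Scale higher-is-better results so the slowest run reads 100%."""
    base = min(values)
    return [100.0 * v / base for v in values]

def geometric_mean(values):
    """Geometric mean, the aggregate PTS uses across heterogeneous tests."""
    return prod(values) ** (1.0 / len(values))

# Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec), runs a..e:
runs = [2462536.66, 2470315.91, 2377430.14, 2383147.07, 2414601.07]
relative = normalize(runs)
print([round(r, 1) for r in relative])       # slowest run (c) shows as 100.0
print(round(geometric_mean(relative), 1))
```

A geometric mean is used rather than an arithmetic mean so that no single test with large absolute values dominates the aggregate.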

[Condensed results table: the per-test values for runs a-e are reproduced in the individual result graphs below. The Blender results appear only in this table and are recovered here.]

Blender - CPU-Only (seconds, fewer is better):
Scene: BMW27 - a: 25.74, b: 25.63, c: 25.69, d: 25.75, e: 25.71
Scene: Classroom - a: 68.93, b: 69.16, c: 70.03, d: 69.16, e: 69.25
Scene: Fishy Cat - a: 35.75, b: 35.26, c: 35.03, d: 35.16, e: 35.5
Scene: Barbershop - a: 272.91, b: 273.14, c: 272.64, d: 272.41, e: 272.44
Scene: Pabellon Barcelona - a: 88.73, b: 88.5, c: 88.83, d: 88.18, e: 88.09

Dragonflydb

Dragonfly is an open-source in-memory database server positioned as a "modern Redis replacement"; it aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
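memtier_benchmark drives the server over the Redis wire protocol (RESP) with a configured SET:GET ratio (1:5, 1:10, and 1:100 in this file). As a rough, hypothetical illustration of what such a workload consists of — a command encoder and ratio schedule only, not memtier's actual implementation and with no live server involved:

```python
def encode_resp(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings (the Redis wire format)."""
    out = [f"*{len(args)}\r\n".encode()]
    for a in args:
        data = a.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

def workload(n_ops: int, set_ratio: int = 1, get_ratio: int = 5):
    """Yield SET/GET commands in the given ratio, e.g. one SET per five GETs."""
    cycle = set_ratio + get_ratio
    for i in range(n_ops):
        if i % cycle < set_ratio:
            yield encode_resp("SET", f"key:{i}", "value")
        else:
            yield encode_resp("GET", f"key:{i}")

cmds = list(workload(12))
print(cmds[0])  # b'*3\r\n$3\r\nSET\r\n$5\r\nkey:0\r\n$5\r\nvalue\r\n'
```

In a real run these byte strings would be pipelined over TCP sockets by many client threads; the connection errors recorded below mean the Dragonfly server dropped or refused those connections before any such traffic could complete.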

Clients Per Thread: 20 - Set To Get Ratio: 1:5

a, b, c, d, e: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 20 - Set To Get Ratio: 1:10

a, b, c, e: The test run did not produce a result. E: Connection error: Connection reset by peer

d: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 20 - Set To Get Ratio: 1:100

a, d: The test run did not produce a result. E: Connection error: Connection refused

b, c, e: The test run did not produce a result. E: Connection error: Connection reset by peer

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1

a, b, c, d, e: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5

a, b, c, d, e: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

a, b, c, d, e: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.5 (items/sec, more is better):

NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream: a: 56.12, b: 57.79, c: 57.93, d: 58.25, e: 57.21 (SE +/- 0.18, N = 3)
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream: a: 35.49, b: 35.36, c: 35.62, d: 35.59, e: 35.09 (SE +/- 0.07, N = 3)
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream: a: 1905.98, b: 1891.06, c: 1891.33, d: 1890.57, e: 1905.15 (SE +/- 1.24, N = 3)
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream: a: 208.96, b: 209.10, c: 208.43, d: 209.66, e: 205.24 (SE +/- 0.71, N = 3)
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream: a: 648.32, b: 644.05, c: 649.30, d: 648.39, e: 633.45 (SE +/- 5.86, N = 3)
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream: a: 180.14, b: 179.68, c: 178.72, d: 180.44, e: 177.01 (SE +/- 0.21, N = 3)
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream: a: 192.50, b: 191.99, c: 191.61, d: 190.82, e: 191.50 (SE +/- 0.10, N = 3)
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream: a: 86.93, b: 87.16, c: 86.79, d: 87.34, e: 85.16 (SE +/- 0.09, N = 3)
ResNet-50, Baseline - Asynchronous Multi-Stream: a: 780.50, b: 782.04, c: 780.53, d: 780.31, e: 783.20 (SE +/- 0.80, N = 3)
ResNet-50, Baseline - Synchronous Single-Stream: a: 272.80, b: 270.23, c: 268.15, d: 270.35, e: 266.34 (SE +/- 0.64, N = 3)
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream: a: 5672.80, b: 5678.52, c: 5677.04, d: 5665.47, e: 5679.78 (SE +/- 3.30, N = 3)
ResNet-50, Sparse INT8 - Synchronous Single-Stream: a: 759.48, b: 746.71, c: 758.34, d: 759.43, e: 709.15 (SE +/- 0.65, N = 3)
CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream: a: 346.94, b: 347.66, c: 348.01, d: 346.56, e: 347.76 (SE +/- 0.33, N = 3)
CV Detection, YOLOv5s COCO - Synchronous Single-Stream: a: 184.39, b: 185.75, c: 184.20, d: 184.93, e: 181.20 (SE +/- 0.69, N = 3)
BERT-Large, NLP Question Answering - Asynchronous Multi-Stream: a: 62.58, b: 61.60, c: 61.41, d: 62.17, e: 61.07 (SE +/- 0.44, N = 3)
BERT-Large, NLP Question Answering - Synchronous Single-Stream: a: 21.16, b: 21.16, c: 21.09, d: 21.06, e: 20.74 (SE +/- 0.07, N = 3)
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream: a: 780.40, b: 781.30, c: 780.33, d: 778.76, e: 782.54 (SE +/- 0.42, N = 3)
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream: a: 267.99, b: 270.85, c: 271.62, d: 271.78, e: 264.31 (SE +/- 0.45, N = 3)
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream: a: 348.53, b: 348.72, c: 347.59, d: 347.51, e: 348.14 (SE +/- 0.57, N = 3)
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream: a: 183.63, b: 183.93, c: 183.82, d: 184.05, e: 181.13 (SE +/- 0.31, N = 3)
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream: a: 475.72, b: 487.46, c: 487.18, d: 485.72, e: 488.25 (SE +/- 0.85, N = 3)
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream: a: 121.89, b: 123.68, c: 124.62, d: 123.45, e: 120.44 (SE +/- 0.43, N = 3)
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream: a: 75.60, b: 75.88, c: 75.61, d: 75.55, e: 75.71 (SE +/- 0.23, N = 3)
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream: a: 39.94, b: 39.96, c: 39.93, d: 39.87, e: 39.74 (SE +/- 0.01, N = 3)
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream: a: 868.50, b: 869.47, c: 870.76, d: 871.21, e: 870.67 (SE +/- 0.53, N = 3)
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream: a: 117.23, b: 114.61, c: 116.35, d: 117.22, e: 113.24
NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream: a: 218.11, b: 224.16, c: 220.49, d: 229.45, e: 217.72
NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream: a: 61.65, b: 61.89, c: 61.61, d: 61.83, e: 61.85
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream: a: 57.08, b: 57.57, c: 57.99, d: 58.21, e: 57.52
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream: a: 35.76, b: 35.72, c: 35.86, d: 35.76, e: 35.88

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
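The Crypto++ benchmark reports throughput in MiB/second. The same style of measurement can be sketched with Python's standard-library hashlib — note this times Python's OpenSSL-backed SHA-256, not Crypto++, so the numbers are not comparable to the results below:

```python
import hashlib
import time

def hash_throughput_mib_s(algorithm: str = "sha256",
                          chunk_size: int = 1 << 20,
                          duration: float = 0.25) -> float:
    """Hash a fixed 1 MiB buffer repeatedly for `duration` seconds; return MiB/second."""
    buf = b"\x00" * chunk_size
    h = hashlib.new(algorithm)
    processed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        h.update(buf)
        processed += chunk_size
    elapsed = time.perf_counter() - start
    return processed / elapsed / (1 << 20)

print(f"SHA-256: {hash_throughput_mib_s():.1f} MiB/second")
```

Timing a fixed buffer in a tight loop, as here, measures raw algorithm throughput while amortizing call overhead, which is the same approach batch benchmarks like Crypto++'s use.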

Crypto++ 8.8 - Test: All Algorithms (MiB/second, more is better): a: 1663.92

Crypto++ 8.8 - Test: Keyed Algorithms (MiB/second, more is better): a: 595.37

Crypto++ 8.8 - Test: Unkeyed Algorithms (MiB/second, more is better): a: 452.34

1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3 - Test: Writes (Op/s, more is better): a: 134694, b: 134932, c: 121708, d: 140934, e: 137849

Dragonflydb


Dragonflydb 1.6.2 (Ops/sec, more is better):

Clients Per Thread: 10 - Set To Get Ratio: 1:5: a: 14247982.45, b: 14476834.10, c: 14235868.61, d: 14750102.52, e: 14392511.79 (SE +/- 71046.49, N = 3)
Clients Per Thread: 10 - Set To Get Ratio: 1:10: a: 14662941.42, b: 14292262.52, c: 14204640.25, d: 14205235.71, e: 14390358.99 (SE +/- 31999.75, N = 3)
Clients Per Thread: 10 - Set To Get Ratio: 1:100: a: 14338166.15, b: 14432034.21, c: 14307949.03, d: 14571297.77, e: 14492478.36 (SE +/- 145332.09, N = 3)

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark


Redis 7.0.12 + memtier_benchmark 2.0 (Ops/sec, more is better):

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1: a: 2462536.66, b: 2470315.91, c: 2377430.14, d: 2383147.07, e: 2414601.07
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5: a: 2613416.65, b: 2546591.86, c: 2516320.69, d: 2452685.99, e: 2444450.11
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1: a: 2503378.33, b: 2630752.03, c: 2460387.75, d: 2493158.97, e: 2474272.95
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5: a: 2627400.98, b: 2710324.97, c: 2554516.65, d: 2549066.22, e: 2541163.68
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10: a: 2646929.55, b: 2544897.58, c: 2508415.59, d: 2487096.51, e: 2540848.35
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10: a: 2929613.17, b: 2583878.29, c: 2611551.59, d: 2593991.10, e: 2755166.14

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 - VGR Performance Metric (more is better): a: 825917, b: 823602, c: 820410, d: 812806, e: 822977

1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Streamedcba120240360480600SE +/- 1.23, N = 3520.22511.42517.84517.16534.50

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Streamedcba714212835SE +/- 0.05, N = 328.4928.0928.0728.2728.17

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Streamedcba48121620SE +/- 0.01, N = 315.7215.8415.8315.8315.70

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Streamedcba1.09572.19143.28714.38285.4785SE +/- 0.0162, N = 34.86994.76754.79564.78024.7834

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Streamedcba1122334455SE +/- 0.41, N = 347.3246.2446.1846.5446.25

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Streamedcba1.26952.5393.80855.0786.3475SE +/- 0.0064, N = 35.64205.53585.58865.55925.5454

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Streamedcba306090120150SE +/- 0.08, N = 3156.56157.18156.46156.21155.80

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Streamedcba3691215SE +/- 0.01, N = 311.7311.4411.5111.4711.49

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Streamedcba918273645SE +/- 0.04, N = 338.2838.4238.4138.3438.41

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Synchronous Single-Streamedcba0.84371.68742.53113.37484.2185SE +/- 0.0089, N = 33.74993.69383.72483.69613.6608

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Streamedcba1.18672.37343.56014.74685.9335SE +/- 0.0032, N = 35.26155.27435.26325.26275.2680

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Streamedcba0.31670.63340.95011.26681.5835SE +/- 0.0011, N = 31.40761.31461.31671.33711.3144

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Streamedcba20406080100SE +/- 0.08, N = 386.1786.5386.1686.2586.43

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Streamedcba1.24052.4813.72154.9626.2025SE +/- 0.0199, N = 35.51345.40205.42395.37865.4182

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Streamedcba110220330440550SE +/- 2.68, N = 3488.23482.44488.25486.19479.13

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Streamedcba1122334455SE +/- 0.16, N = 348.2247.4747.4247.2547.24

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Streamedcba918273645SE +/- 0.02, N = 338.3138.4938.4138.3738.39

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Streamedcba0.85021.70042.55063.40084.251SE +/- 0.0060, N = 33.77883.67433.67703.68713.7271

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Streamedcba20406080100SE +/- 0.14, N = 386.1386.2986.2885.9386.04

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Streamedcba1.24162.48323.72484.96646.208SE +/- 0.0092, N = 35.51835.43075.43725.43405.4431

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
ms/batch, fewer is better (SE +/- 0.11, N = 3): e: 61.42 | d: 61.69 | c: 61.55 | b: 61.51 | a: 63.01

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
ms/batch, fewer is better (SE +/- 0.0280, N = 3): e: 8.2973 | d: 8.0945 | c: 8.0190 | b: 8.0804 | a: 8.1989

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
ms/batch, fewer is better (SE +/- 1.22, N = 3): e: 396.17 | d: 396.99 | c: 396.68 | b: 395.21 | a: 396.73

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
ms/batch, fewer is better (SE +/- 0.00, N = 3): e: 25.13 | d: 25.05 | c: 25.01 | b: 24.99 | a: 25.00

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
ms/batch, fewer is better (SE +/- 0.02, N = 3): e: 34.42 | d: 34.40 | c: 34.42 | b: 34.46 | a: 34.49

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
ms/batch, fewer is better: e: 8.8247 | d: 8.5255 | c: 8.5903 | b: 8.7193 | a: 8.5248

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
ms/batch, fewer is better: e: 137.76 | d: 130.52 | c: 136.00 | b: 133.78 | a: 137.51

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
ms/batch, fewer is better: e: 16.16 | d: 16.17 | c: 16.22 | b: 16.15 | a: 16.21

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch, fewer is better: e: 519.27 | d: 514.90 | c: 516.04 | b: 520.87 | a: 525.14

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
ms/batch, fewer is better: e: 27.86 | d: 27.96 | c: 27.88 | b: 27.99 | a: 27.96

Blender

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only
Seconds, fewer is better: e: 25.71 | d: 25.75 | c: 25.69 | b: 25.63 | a: 25.74

Blender 3.6 - Blend File: Classroom - Compute: CPU-Only
Seconds, fewer is better: e: 69.25 | d: 69.16 | c: 70.03 | b: 69.16 | a: 68.93

Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only
Seconds, fewer is better: e: 35.50 | d: 35.16 | c: 35.03 | b: 35.26 | a: 35.75

Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only
Seconds, fewer is better: e: 272.44 | d: 272.41 | c: 272.64 | b: 273.14 | a: 272.91

Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only
Seconds, fewer is better: e: 88.09 | d: 88.18 | c: 88.83 | b: 88.50 | a: 88.73
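The "SE +/- x, N = 3" annotations above are standard errors of the mean over three trial runs (sample standard deviation divided by the square root of N), which is how the Phoronix Test Suite reports run-to-run variance. A minimal sketch of that calculation, using hypothetical trial times not taken from this result file:

```python
import statistics

def standard_error(samples):
    # SE of the mean: sample standard deviation / sqrt(N)
    return statistics.stdev(samples) / len(samples) ** 0.5

# Hypothetical trial times in seconds (illustrative only).
trials = [25.63, 25.74, 25.69]
mean = statistics.mean(trials)
se = standard_error(trials)
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(trials)})")
# prints "25.69 (SE +/- 0.03, N = 3)"
```

A small SE relative to the mean (as in most charts above) indicates the reported averages are stable across trials.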

79 Results Shown

Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Baseline - Synchronous Single-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
Crypto++:
  All Algorithms
  Keyed Algorithms
  Unkeyed Algorithms
Apache Cassandra
Dragonflydb:
  10 - 1:5
  10 - 1:10
  10 - 1:100
Redis 7.0.12 + memtier_benchmark:
  Redis - 50 - 1:1
  Redis - 50 - 1:5
  Redis - 100 - 1:1
  Redis - 100 - 1:5
  Redis - 50 - 1:10
  Redis - 100 - 1:10
BRL-CAD
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Baseline - Synchronous Single-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only