dds

AMD Ryzen 7 PRO 5850U testing with an HP 8A78 (F.04 BIOS) and AMD Cezanne 512MB on Pop 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2302062-NE-DDS21250757&sor.
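The OpenBenchmarking.org result ID in the URL above can normally be fed back to the Phoronix Test Suite to re-run the same comparison locally. A typical invocation (a hedged example, assuming the Phoronix Test Suite is installed; not part of the original export):

    phoronix-test-suite benchmark 2302062-NE-DDS21250757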

System details (identical for configurations s, b, and c):

Processor: AMD Ryzen 7 PRO 5850U @ 4.51GHz (8 Cores / 16 Threads)
Motherboard: HP 8A78 (F.04 BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 16GB
Disk: 1024GB SK hynix PC711 HFS001TDE9X073N
Graphics: AMD Cezanne 512MB (2000/400MHz)
Audio: AMD Renoir Radeon HD Audio
Network: Realtek RTL8822CE 802.11ac PCIe
OS: Pop 22.04
Kernel: 5.19.0-76051900-generic (x86_64)
Desktop: GNOME Shell 42.3.1
Display Server: X Server 1.21.1.3
OpenGL: 4.6 Mesa 22.0.5 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.3.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate schedutil (Boost: Enabled) - CPU Microcode: 0xa50000c
Graphics Details: GLAMOR - BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-CEZANNE-018
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results overview for configurations s, b, and c: the full per-test results are presented below.

ET: Legacy

Resolution: 1920 x 1080

ET: Legacy 2.81 - Frames Per Second, More Is Better
c: 171.8
b: 169.4
s: 157.8

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: High

Unvanquished 0.54 - Frames Per Second, More Is Better
s: 230.8
c: 220.8
b: 206.9

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Ultra

Unvanquished 0.54 - Frames Per Second, More Is Better
c: 179.3
s: 177.1
b: 176.3

Unvanquished

Resolution: 1920 x 1080 - Effects Quality: Medium

Unvanquished 0.54 - Frames Per Second, More Is Better
b: 282.7
s: 277.3
c: 230.3

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better
c: 2.258
b: 2.257
s: 2.254
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better
s: 4.881
b: 4.840
c: 4.828
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.7 - Frames Per Second, More Is Better
s: 6.485
b: 6.479
c: 6.475
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.7 - Frames Per Second, More Is Better
s: 15.59
c: 15.58
b: 15.42
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

ClickHouse

100M Rows Hits Dataset, First Run / Cold Cache

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better
c: 92.90 (MIN: 5.01 / MAX: 5000)
b: 90.73 (MIN: 5.02 / MAX: 5454.55)
s: 90.16 (MIN: 5.01 / MAX: 5454.55)
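As a side note on the metric used in the ClickHouse results: "Queries Per Minute, Geo Mean" is a geometric mean across the per-query throughput figures. A minimal Python sketch of that aggregation, using made-up numbers rather than values from this result:

    # Illustrative only: aggregate hypothetical per-query queries-per-minute
    # rates into a single geometric-mean summary figure.
    from statistics import geometric_mean

    per_query_qpm = [120.0, 85.0, 95.5, 101.2]  # hypothetical values
    print(round(geometric_mean(per_query_qpm), 2))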

ClickHouse

100M Rows Hits Dataset, Second Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better
c: 101.23 (MIN: 5.12 / MAX: 6666.67)
b: 100.50 (MIN: 5.11 / MAX: 6666.67)
s: 100.05 (MIN: 5.12 / MAX: 6666.67)

ClickHouse

100M Rows Hits Dataset, Third Run

ClickHouse 22.12.3.5 - Queries Per Minute, Geo Mean, More Is Better
b: 102.82 (MIN: 5.13 / MAX: 6666.67)
s: 102.13 (MIN: 5.12 / MAX: 6000)
c: 101.36 (MIN: 5.1 / MAX: 6000)

Apache Spark

Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time

Apache Spark 3.3 - Seconds, Fewer Is Better
s: 4.24
b: 4.31
c: 4.60

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark

Apache Spark 3.3 - Seconds, Fewer Is Better
c: 232.54
s: 233.02
b: 234.51

Apache Spark

Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe

Apache Spark 3.3 - Seconds, Fewer Is Better
b: 12.89
s: 12.95
c: 13.05

Apache Spark

Row Count: 1000000 - Partitions: 100 - Group By Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
b: 4.29
s: 4.34
c: 4.41

Apache Spark

Row Count: 1000000 - Partitions: 100 - Repartition Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
b: 3.08
s: 3.26
c: 3.26

Apache Spark

Row Count: 1000000 - Partitions: 100 - Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
b: 2.34
c: 2.397617523
s: 2.42

Apache Spark

Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time

Apache Spark 3.3 - Seconds, Fewer Is Better
c: 1.92
b: 1.94
s: 1.990703468

Memcached

Set To Get Ratio: 1:1

Memcached 1.6.18 - Ops/sec, More Is Better
b: 1835865.54
s: 1830535.33
c: 1812832.85
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:5

Memcached 1.6.18 - Ops/sec, More Is Better
s: 1466197.67
b: 1455028.55
c: 1449002.16
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:10

Memcached 1.6.18 - Ops/sec, More Is Better
c: 1359271.06
s: 1331770.61
b: 1322702.52
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached

Set To Get Ratio: 1:100

Memcached 1.6.18 - Ops/sec, More Is Better
c: 1269999.44
b: 1247617.09
s: 1243196.21
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

KeyDB

Test: GET - Parallel Connections: 50

KeyDB 6.3.2 - Requests Per Second, More Is Better
c: 2543882.00
s: 1893222.25
b: 1441420.62
1. (CXX) g++ options: -ggdb -rdynamic -lm -lz -lcrypto -lbz2 -lzstd -llz4 -lsnappy -latomic -ldl -pthread -lrt -luuid -lcurl -lssl -std=c++14 -pedantic -fno-rtti -O2 -flto -MMD

KeyDB

Test: SET - Parallel Connections: 50

KeyDB 6.3.2 - Requests Per Second, More Is Better
b: 2049264.25
c: 1958326.88
s: 1859289.00
1. (CXX) g++ options: -ggdb -rdynamic -lm -lz -lcrypto -lbz2 -lzstd -llz4 -lsnappy -latomic -ldl -pthread -lrt -luuid -lcurl -lssl -std=c++14 -pedantic -fno-rtti -O2 -flto -MMD

KeyDB

Test: LPOP - Parallel Connections: 50

KeyDB 6.3.2 - Requests Per Second, More Is Better
b: 2481759.25
s: 2094328.50
c: 1872168.25
1. (CXX) g++ options: -ggdb -rdynamic -lm -lz -lcrypto -lbz2 -lzstd -llz4 -lsnappy -latomic -ldl -pthread -lrt -luuid -lcurl -lssl -std=c++14 -pedantic -fno-rtti -O2 -flto -MMD

KeyDB

Test: SADD - Parallel Connections: 50

KeyDB 6.3.2 - Requests Per Second, More Is Better
c: 2163659.25
s: 1819041.75
b: 1482755.50
1. (CXX) g++ options: -ggdb -rdynamic -lm -lz -lcrypto -lbz2 -lzstd -llz4 -lsnappy -latomic -ldl -pthread -lrt -luuid -lcurl -lssl -std=c++14 -pedantic -fno-rtti -O2 -flto -MMD

KeyDB

Test: HMSET - Parallel Connections: 50

KeyDB 6.3.2 - Requests Per Second, More Is Better
s: 700928.06
b: 698753.44
c: 698597.25
1. (CXX) g++ options: -ggdb -rdynamic -lm -lz -lcrypto -lbz2 -lzstd -llz4 -lsnappy -latomic -ldl -pthread -lrt -luuid -lcurl -lssl -std=c++14 -pedantic -fno-rtti -O2 -flto -MMD

KeyDB

Test: LPUSH - Parallel Connections: 50

KeyDB 6.3.2 - Requests Per Second, More Is Better
c: 1662123.50
s: 1639720.62
b: 1509342.88
1. (CXX) g++ options: -ggdb -rdynamic -lm -lz -lcrypto -lbz2 -lzstd -llz4 -lsnappy -latomic -ldl -pthread -lrt -luuid -lcurl -lssl -std=c++14 -pedantic -fno-rtti -O2 -flto -MMD

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
b: 4.4792
c: 4.4656
s: 4.4326

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
b: 892.99
c: 895.31
s: 897.72

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 4.3484
b: 4.3458
c: 4.3407

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 229.96
b: 230.09
c: 230.36

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
c: 61.65
b: 60.83
s: 60.11

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
c: 64.86
b: 65.73
s: 66.53

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 49.84
c: 49.70
b: 49.44

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 20.05
c: 20.11
b: 20.22

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
b: 18.28
c: 18.19
s: 18.05

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
b: 218.42
c: 219.62
s: 221.41

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
c: 16.20
s: 16.17
b: 16.11

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
c: 61.73
s: 61.83
b: 62.06

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
c: 26.76
s: 26.72
b: 26.71

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
c: 149.40
s: 149.65
b: 149.71

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
b: 24.90
c: 24.87
s: 24.83

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
b: 40.14
c: 40.19
s: 40.25

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 59.59
c: 59.52
b: 59.23

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 67.11
c: 67.19
b: 67.51

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
b: 53.73
c: 53.57
s: 53.55

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
b: 18.60
c: 18.65
s: 18.66

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
c: 39.58
b: 39.55
s: 39.09

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
c: 101.03
b: 101.12
s: 102.30

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
b: 34.98
c: 34.96
s: 34.93

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
b: 28.58
c: 28.59
s: 28.62

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 6.6780
c: 6.6511
b: 6.6499

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 598.95
c: 601.37
b: 601.48

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 6.5375
c: 6.5336
b: 6.5335

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 152.95
b: 153.04
c: 153.04

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 19.42
b: 19.42
c: 19.39

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 205.94
b: 205.95
c: 206.30

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
s: 17.35
b: 17.32
c: 17.31

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
s: 57.64
b: 57.71
c: 57.74

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
b: 4.4933
c: 4.4681
s: 4.4669

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
b: 890.19
c: 895.20
s: 895.44

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - items/sec, More Is Better
c: 4.3555
b: 4.3485
s: 4.3440

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Neural Magic DeepSparse 1.3.2 - ms/batch, Fewer Is Better
c: 229.58
b: 229.95
s: 230.19

OpenEMS

Test: pyEMS Coupler

OpenEMS 0.0.35-86 - MCells/s, More Is Better
s: 28.40
c: 28.34
b: 28.14
1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

OpenEMS

Test: openEMS MSL_NotchFilter

OpenEMS 0.0.35-86 - MCells/s, More Is Better
b: 75.09
c: 73.38
s: 72.46
1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

RocksDB

Test: Random Fill

RocksDB 7.9.2 - Op/s, More Is Better
b: 734273
s: 733471
c: 732871
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

RocksDB 7.9.2 - Op/s, More Is Better
c: 35910234
s: 35831332
b: 35504961
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

RocksDB 7.9.2 - Op/s, More Is Better
b: 370999
c: 369363
s: 369357
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Sequential Fill

RocksDB 7.9.2 - Op/s, More Is Better
b: 1005983
c: 994060
s: 988800
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Fill Sync

RocksDB 7.9.2 - Op/s, More Is Better
c: 15196
s: 15188
b: 15150
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

RocksDB 7.9.2 - Op/s, More Is Better
c: 1656909
b: 1647252
s: 1606138
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

RocksDB 7.9.2 - Op/s, More Is Better
b: 1234760
c: 1233032
s: 1221543
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread


Phoronix Test Suite v10.8.5