deepspaarse 17: AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS) and NVIDIA GeForce RTX 3080 10GB on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2403151-PTS-DEEPSPAA58&grr&rdt
deepspaarse 17 - System Configuration (identical for runs a, b, c, d, and e)

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 16GB DRAM-6000MT/s G Skill F5-6000J3038F16G
Disk: 2000GB Samsung SSD 980 PRO 2TB + Western Digital WD_BLACK SN850X 2000GB
Graphics: NVIDIA GeForce RTX 3080 10GB
Audio: NVIDIA GA102 HD Audio
Monitor: DELL U2723QE
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.10
Kernel: 6.7.0-060700-generic (x86_64)
Desktop: GNOME Shell 45.2
Display Server: X Server 1.21.1.7
Display Driver: NVIDIA 550.54.14
OpenGL: 4.6.0
OpenCL: OpenCL 3.0 CUDA 12.4.89
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

OpenBenchmarking.org Notes
Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); CPU Microcode: 0xa601206
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
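The results below were produced through the Phoronix Test Suite, but the engine also ships its own benchmarking entry point. As a rough, hedged sketch of how one of the model/scenario combinations reported below could be re-run directly, the Python snippet shells out to the deepsparse.benchmark CLI; the model path, run length, and batch size are placeholder assumptions (not values taken from this result file), and the -s/-t/-b flags are assumed to match DeepSparse 1.x behaviour.

```python
# Sketch: drive the deepsparse.benchmark CLI for the two scenarios used in this
# result file (Synchronous Single-Stream vs. Asynchronous Multi-Stream).
# Assumptions: DeepSparse 1.x is installed ("pip install deepsparse") and
# "resnet50.onnx" is a placeholder for whatever model you actually want to test.
import subprocess

MODEL = "resnet50.onnx"    # placeholder path, not taken from this result file
RUN_SECONDS = "30"         # assumed run length per test

for scenario in ("sync", "async"):
    cmd = [
        "deepsparse.benchmark",
        MODEL,
        "-s", scenario,    # sync = single-stream, async = multi-stream
        "-t", RUN_SECONDS, # seconds to run the benchmark
        "-b", "1",         # batch size 1, matching per-item latency testing
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```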
deepspaarse 17 - Neural Magic DeepSparse 1.7 Result Overview (runs a-e; ms/batch: fewer is better, items/sec: more is better)

CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (ms/batch): a 28.6159, b 28.5076, c 28.4559, d 28.5083, e 28.3070
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (items/sec): a 279.4430, b 280.4549, c 280.98, d 280.5104, e 282.4848
Llama2 Chat 7b Quantized - Asynchronous Multi-Stream (ms/batch): a 1900.4795, b 1894.2203, c 1905.4156, d 1908.6078, e 1900.5641
Llama2 Chat 7b Quantized - Asynchronous Multi-Stream (items/sec): a 4.0959, b 4.1098, c 4.0853, d 4.0791, e 4.0962
Llama2 Chat 7b Quantized - Synchronous Single-Stream (ms/batch): a 131.3760, b 131.409, c 131.4587, d 131.5957, e 131.5494
Llama2 Chat 7b Quantized - Synchronous Single-Stream (items/sec): a 7.6108, b 7.6089, c 7.606, d 7.5982, e 7.6006
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream (ms/batch): a 3.3206, b 3.3168, c 3.3287, d 3.3113, e 3.3176
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream (items/sec): a 300.8741, b 301.2353, c 300.1138, d 301.7005, e 301.1689
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream (ms/batch): a 8.5925, b 8.6079, c 8.6128, d 8.6817, e 8.6244
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream (items/sec): a 929.8526, b 928.1657, c 927.609, d 920.2774, e 926.3913
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream (ms/batch): a 18.5738, b 18.5138, c 18.5539, d 18.5638, e 18.6294
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream (items/sec): a 430.4254, b 431.8114, c 430.8886, d 430.6637, e 429.1529
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream (ms/batch): a 8.8521, b 8.8696, c 8.9004, d 8.8748, e 8.8780
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream (items/sec): a 112.8655, b 112.649, c 112.2523, d 112.5691, e 112.5344
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream (ms/batch): a 53.7610, b 53.9298, c 53.7145, d 53.6024, e 53.7068
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream (items/sec): a 18.5981, b 18.5403, c 18.6145, d 18.6535, e 18.6168
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms/batch): a 369.7269, b 376.3639, c 375.8659, d 376.5224, e 374.9653
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (items/sec): a 21.5986, b 21.2505, c 21.2799, d 21.2445, e 21.3119
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (ms/batch): a 371.5702, b 370.8275, c 374.0775, d 371.948, e 372.4923
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (items/sec): a 21.5097, b 21.4734, c 21.3153, d 21.4523, e 21.4247
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream (ms/batch): a 53.1496, b 53.5945, c 53.494, d 53.4247, e 53.5694
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream (items/sec): a 18.8124, b 18.6562, c 18.6913, d 18.7156, e 18.6648
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (ms/batch): a 212.4479, b 211.7512, c 211.819, d 212.0904, e 212.7103
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (items/sec): a 37.6522, b 37.7759, c 37.7639, d 37.7156, e 37.6057
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream (ms/batch): a 31.6588, b 31.5706, c 31.5743, d 31.5308, e 31.6738
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream (items/sec): a 31.5733, b 31.6614, c 31.6577, d 31.7017, e 31.5591
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream (ms/batch): a 8.4942, b 8.5902, c 8.5743, d 8.6715, e 8.5269
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream (items/sec): a 117.6643, b 116.3492, c 116.5639, d 115.2566, e 117.2079
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (ms/batch): a 41.7072, b 41.53, c 41.5244, d 41.5819, e 41.9183
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (items/sec): a 191.7023, b 192.506, c 192.6076, d 192.3054, e 190.7916
ResNet-50, Baseline - Synchronous Single-Stream (ms/batch): a 5.1149, b 5.0976, c 5.1095, d 5.0963, e 5.1030
ResNet-50, Baseline - Synchronous Single-Stream (items/sec): a 195.2905, b 195.9601, c 195.5005, d 196.0237, e 195.7473
ResNet-50, Sparse INT8 - Synchronous Single-Stream (ms/batch): a 0.6937, b 0.6932, c 0.6899, d 0.7019, e 0.6957
ResNet-50, Sparse INT8 - Synchronous Single-Stream (items/sec): a 1437.5523, b 1438.271, c 1444.8333, d 1420.7445, e 1433.2426
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (ms/batch): a 3.3518, b 3.3559, c 3.3425, d 3.3488, e 3.3800
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (items/sec): a 2379.9513, b 2376.5719, c 2385.769, d 2381.2716, e 2359.6605
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream (ms/batch): a 63.2681, b 63.1227, c 62.9466, d 63.2608, e 63.0966
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream (items/sec): a 126.3399, b 126.6763, c 127.0251, d 126.4112, e 126.7258
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream (ms/batch): a 10.0959, b 10.046, c 10.0546, d 10.046, e 10.0857
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream (items/sec): a 99.0032, b 99.497, c 99.4073, d 99.4984, e 99.1040
ResNet-50, Baseline - Asynchronous Multi-Stream (ms/batch): a 28.4928, b 28.5051, c 28.458, d 28.5145, e 28.5061
ResNet-50, Baseline - Asynchronous Multi-Stream (items/sec): a 280.6164, b 280.4277, c 281.0267, d 280.4486, e 280.5353
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream (ms/batch): a 5.1378, b 5.112, c 5.088, d 5.0871, e 5.0981
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream (items/sec): a 194.4294, b 195.4038, c 196.3357, d 196.3727, e 195.9355
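To give a sense of how consistent the five runs are, the following small standard-library sketch computes the mean, standard deviation, and peak-to-peak spread for one row of the table above, using the ResNet-50, Sparse INT8 - Asynchronous Multi-Stream throughput figures; the same helper can be pointed at any other row.

```python
# Run-to-run spread for one row of the result table above.
# Values are the items/sec figures for ResNet-50, Sparse INT8 -
# Asynchronous Multi-Stream across runs a-e (copied from the table).
from statistics import mean, stdev

runs = {"a": 2379.9513, "b": 2376.5719, "c": 2385.769, "d": 2381.2716, "e": 2359.6605}

values = list(runs.values())
avg = mean(values)
sd = stdev(values)                                     # sample standard deviation
spread_pct = (max(values) - min(values)) / avg * 100   # peak-to-peak spread

print(f"mean = {avg:.2f} items/sec")
print(f"stdev = {sd:.2f} items/sec")
print(f"peak-to-peak spread = {spread_pct:.2f}% of the mean")
# For this row the spread works out to roughly 1% of the mean, i.e. the five
# runs are effectively indistinguishable.
```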
Individual Test Results - Neural Magic DeepSparse 1.7 (per-run values a-e; reported standard errors are over N = 3 trials per bar)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 28.62, b 28.51, c 28.46, d 28.51, e 28.31 (SE +/- 0.04 and 0.08, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 279.44, b 280.45, c 280.98, d 280.51, e 282.48 (SE +/- 0.35 and 0.79, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 1900.48, b 1894.22, c 1905.42, d 1908.61, e 1900.56 (SE +/- 0.71 and 1.74, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 4.0959, b 4.1098, c 4.0853, d 4.0791, e 4.0962 (SE +/- 0.0014 and 0.0043, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 131.38, b 131.41, c 131.46, d 131.60, e 131.55 (SE +/- 0.09 and 0.05, N = 3)
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream; items/sec, more is better: a 7.6108, b 7.6089, c 7.6060, d 7.5982, e 7.6006 (SE +/- 0.0050 and 0.0028, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 3.3206, b 3.3168, c 3.3287, d 3.3113, e 3.3176 (SE +/- 0.0086 and 0.0148, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream; items/sec, more is better: a 300.87, b 301.24, c 300.11, d 301.70, e 301.17 (SE +/- 0.77 and 1.34, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 8.5925, b 8.6079, c 8.6128, d 8.6817, e 8.6244 (SE +/- 0.0153 and 0.0104, N = 3)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 929.85, b 928.17, c 927.61, d 920.28, e 926.39 (SE +/- 1.67 and 1.13, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 18.57, b 18.51, c 18.55, d 18.56, e 18.63 (SE +/- 0.01 and 0.03, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 430.43, b 431.81, c 430.89, d 430.66, e 429.15 (SE +/- 0.23 and 0.70, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 8.8521, b 8.8696, c 8.9004, d 8.8748, e 8.8780 (SE +/- 0.0421 and 0.0141, N = 3)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream; items/sec, more is better: a 112.87, b 112.65, c 112.25, d 112.57, e 112.53 (SE +/- 0.54 and 0.18, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 53.76, b 53.93, c 53.71, d 53.60, e 53.71 (SE +/- 0.02 and 0.08, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream; items/sec, more is better: a 18.60, b 18.54, c 18.61, d 18.65, e 18.62 (SE +/- 0.01 and 0.03, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 369.73, b 376.36, c 375.87, d 376.52, e 374.97 (SE +/- 1.41 and 0.28, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 21.60, b 21.25, c 21.28, d 21.24, e 21.31 (SE +/- 0.10 and 0.03, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 371.57, b 370.83, c 374.08, d 371.95, e 372.49 (SE +/- 1.38 and 1.05, N = 3)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 21.51, b 21.47, c 21.32, d 21.45, e 21.42 (SE +/- 0.08 and 0.08, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 53.15, b 53.59, c 53.49, d 53.42, e 53.57 (SE +/- 0.08 and 0.06, N = 3)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream; items/sec, more is better: a 18.81, b 18.66, c 18.69, d 18.72, e 18.66 (SE +/- 0.03 and 0.02, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 212.45, b 211.75, c 211.82, d 212.09, e 212.71 (SE +/- 0.21 and 0.13, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 37.65, b 37.78, c 37.76, d 37.72, e 37.61 (SE +/- 0.04 and 0.02, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 31.66, b 31.57, c 31.57, d 31.53, e 31.67 (SE +/- 0.01 and 0.02, N = 3)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream; items/sec, more is better: a 31.57, b 31.66, c 31.66, d 31.70, e 31.56 (SE +/- 0.01 and 0.02, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 8.4942, b 8.5902, c 8.5743, d 8.6715, e 8.5269 (SE +/- 0.0262 and 0.0075, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream; items/sec, more is better: a 117.66, b 116.35, c 116.56, d 115.26, e 117.21 (SE +/- 0.36 and 0.10, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 41.71, b 41.53, c 41.52, d 41.58, e 41.92 (SE +/- 0.02 and 0.09, N = 3)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 191.70, b 192.51, c 192.61, d 192.31, e 190.79 (SE +/- 0.07 and 0.41, N = 3)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 5.1149, b 5.0976, c 5.1095, d 5.0963, e 5.1030 (SE +/- 0.0035 and 0.0121, N = 3)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream; items/sec, more is better: a 195.29, b 195.96, c 195.50, d 196.02, e 195.75 (SE +/- 0.13 and 0.46, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 0.6937, b 0.6932, c 0.6899, d 0.7019, e 0.6957 (SE +/- 0.0010 and 0.0030, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream; items/sec, more is better: a 1437.55, b 1438.27, c 1444.83, d 1420.74, e 1433.24 (SE +/- 2.00 and 6.10, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 3.3518, b 3.3559, c 3.3425, d 3.3488, e 3.3800 (SE +/- 0.0036 and 0.0197, N = 3)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 2379.95, b 2376.57, c 2385.77, d 2381.27, e 2359.66 (SE +/- 2.33 and 13.77, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 63.27, b 63.12, c 62.95, d 63.26, e 63.10 (SE +/- 0.21 and 0.43, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 126.34, b 126.68, c 127.03, d 126.41, e 126.73 (SE +/- 0.41 and 0.88, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 10.10, b 10.05, c 10.05, d 10.05, e 10.09 (SE +/- 0.00 and 0.01, N = 3)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream; items/sec, more is better: a 99.00, b 99.50, c 99.41, d 99.50, e 99.10 (SE +/- 0.04 and 0.14, N = 3)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream; ms/batch, fewer is better: a 28.49, b 28.51, c 28.46, d 28.51, e 28.51 (SE +/- 0.03 and 0.05, N = 3)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream; items/sec, more is better: a 280.62, b 280.43, c 281.03, d 280.45, e 280.54 (SE +/- 0.32 and 0.54, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream; ms/batch, fewer is better: a 5.1378, b 5.1120, c 5.0880, d 5.0871, e 5.0981 (SE +/- 0.0107 and 0.0126, N = 3)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream; items/sec, more is better: a 194.43, b 195.40, c 196.34, d 196.37, e 195.94 (SE +/- 0.40 and 0.48, N = 3)
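Each per-test entry above carries "SE +/- x, N = 3" annotations, i.e. the standard error of the mean over the three trials run for that bar. As a reminder of how that figure is derived, the sketch below computes it for three made-up trial values; the individual trial results are not part of this export, so the numbers here are purely illustrative.

```python
# Standard error of the mean, as reported in the "SE +/- x, N = 3" labels.
# The three trial values below are hypothetical; the export only contains the
# per-run averages, not the underlying trials.
from math import sqrt
from statistics import mean, stdev

trials = [28.58, 28.66, 28.61]   # hypothetical ms/batch trials for one run

n = len(trials)
se = stdev(trials) / sqrt(n)     # SE = sample standard deviation / sqrt(N)

print(f"mean = {mean(trials):.2f} ms/batch")
print(f"SE +/- {se:.2f}, N = {n}")
```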
Phoronix Test Suite v10.8.5