Neural Magic DeepSparse 1.0.1
pts/deepsparse-1.0.1
- 13 October 2022 -
Initial commit of DeepSparse benchmark.
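As a usage sketch (not part of the profile files themselves), a test profile like this is normally installed and run through the Phoronix Test Suite client:

phoronix-test-suite install pts/deepsparse-1.0.1
phoronix-test-suite benchmark pts/deepsparse-1.0.1

The first command executes install.sh below; the second runs the benchmark and prompts for the Model and Scenario options defined in test-definition.xml.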
install.sh
#!/bin/sh
# Remove the user cache (forces fresh model downloads) and install DeepSparse and SparseZoo into the user site-packages.
rm -rf ~/.cache/
pip3 install --user deepsparse==1.1 sparsezoo==1.1

# Pre-download each SparseZoo model referenced by the test profile's Model option.
~/.local/bin/sparsezoo.download zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none
~/.local/bin/sparsezoo.download zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none
~/.local/bin/sparsezoo.download zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none
~/.local/bin/sparsezoo.download zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none
~/.local/bin/sparsezoo.download zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none
~/.local/bin/sparsezoo.download zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none
~/.local/bin/sparsezoo.download zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none
echo $? > ~/install-exit-status

# Generate the wrapper script that the test profile executes at run time.
echo "#!/bin/sh
~/.local/bin/deepsparse.benchmark \$@ > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > deepsparse
chmod +x deepsparse
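For context, a sketch of how the generated deepsparse wrapper ends up being invoked at run time; the log path below is a placeholder, since the Phoronix Test Suite supplies the real LOG_FILE value along with the selected option arguments:

# Hypothetical run-time invocation; LOG_FILE is normally set by the test client.
LOG_FILE=$HOME/deepsparse-run.log ./deepsparse zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync -t 30 -w 5
cat ~/test-exit-status   # 0 indicates deepsparse.benchmark exited successfully

The wrapper captures all benchmark output into $LOG_FILE, which the results parser below then scans for the throughput and latency lines.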
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>Throughput (items/sec): #_RESULT_#</OutputTemplate>
    <ResultScale>items/sec</ResultScale>
    <ResultProportion>HIB</ResultProportion>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>Latency Mean (ms/batch): #_RESULT_#</OutputTemplate>
    <ResultScale>ms/batch</ResultScale>
    <ResultProportion>LIB</ResultProportion>
  </ResultsParser>
</PhoronixTestSuite>
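For reference, the two OutputTemplate entries above match benchmark log lines of the following shape; the numbers here are placeholders for illustration, not measured results:

Throughput (items/sec): 1234.5678
Latency Mean (ms/batch): 8.1234

The first template yields the higher-is-better items/sec result, the second the lower-is-better mean batch latency.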
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>Neural Magic DeepSparse</Title>
    <AppVersion>1.1</AppVersion>
    <Description>This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/).</Description>
    <ResultScale>items/sec</ResultScale>
    <Proportion>HIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.0.1</Version>
    <SupportedPlatforms>Linux</SupportedPlatforms>
    <SoftwareType>Benchmark</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>python</ExternalDependencies>
    <InstallRequiresInternet>TRUE</InstallRequiresInternet>
    <EnvironmentSize>7100</EnvironmentSize>
    <ProjectURL>https://neuralmagic.com/deepsparse-engine/</ProjectURL>
    <RepositoryURL>https://github.com/neuralmagic/deepsparse</RepositoryURL>
    <Maintainer>Michael Larabel</Maintainer>
    <SystemDependencies>pip3</SystemDependencies>
  </TestProfile>
  <TestSettings>
    <Default>
      <PostArguments> -t 30 -w 5</PostArguments>
    </Default>
    <Option>
      <DisplayName>Model</DisplayName>
      <Identifier>model</Identifier>
      <Menu>
        <Entry>
          <Name>NLP Text Classification, BERT base uncased SST2</Name>
          <Value>zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none</Value>
        </Entry>
        <Entry>
          <Name>NLP Text Classification, DistilBERT mnli</Name>
          <Value>zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none</Value>
        </Entry>
        <Entry>
          <Name>CV Classification, ResNet-50 ImageNet</Name>
          <Value>zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none</Value>
        </Entry>
        <Entry>
          <Name>NLP Token Classification, BERT base uncased conll2003</Name>
          <Value>zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none</Value>
        </Entry>
        <Entry>
          <Name>NLP Question Answering, BERT base uncased SQuAD 12layer Pruned90</Name>
          <Value>zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none</Value>
        </Entry>
        <Entry>
          <Name>CV Detection, YOLOv5s COCO</Name>
          <Value>zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none</Value>
        </Entry>
        <Entry>
          <Name>NLP Document Classification, oBERT base uncased on IMDB</Name>
          <Value>zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none</Value>
        </Entry>
      </Menu>
    </Option>
    <Option>
      <DisplayName>Scenario</DisplayName>
      <Identifier>scenario</Identifier>
      <ArgumentPrefix>--scenario </ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>Synchronous Single-Stream</Name>
          <Value>sync</Value>
        </Entry>
        <Entry>
          <Name>Asynchronous Multi-Stream</Name>
          <Value>async</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
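Combining a Model entry, the Scenario argument prefix, and the PostArguments above, a selected run assembles into a wrapper invocation roughly like the following (a sketch of how the test client concatenates the option values, not a literal transcript):

./deepsparse zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async -t 30 -w 5

That is, deepsparse.benchmark runs the chosen SparseZoo model in the selected streaming scenario for 30 seconds after a 5-second warm-up, and the parsed throughput and latency from each of the three runs (TimesToRun) are averaged into the reported result.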