NCNN 1.4.0
pts/ncnn-1.4.0
- 13 August 2022 -
Update against NCNN 20220729 upstream.
downloads.xml
<?xml version="1.0"?> <!--Phoronix Test Suite v10.8.4--> <PhoronixTestSuite> <Downloads> <Package> <URL>https://github.com/Tencent/ncnn/archive/refs/tags/20220729.tar.gz</URL> <MD5>b35ce9dbe59ad303ee713a8fd42f3c4c</MD5> <SHA256>fa337dce2db3aea82749633322c3572490a86d6f2e144e53aba03480f651991f</SHA256> <FileName>ncnn-20220729.tar.gz</FileName> <FileSize>12235306</FileSize> </Package> <Package> <URL>https://github.com/KhronosGroup/glslang/archive/refs/tags/11.11.0.tar.gz</URL> <MD5>49a6305dfa87d5091eefac1ff492cfc1</MD5> <SHA256>26c216c3062512c018cbdd752224b8dad703b7e5bb90bf338ba2dbb5d4f11438</SHA256> <FileName>glslang-11.11.0.tar.gz</FileName> <FileSize>3542123</FileSize> </Package> </Downloads> </PhoronixTestSuite>
install.sh
#!/bin/sh
tar -xf ncnn-20220729.tar.gz
tar -xf glslang-11.11.0.tar.gz
cp -va glslang-11.11.0/* ncnn-20220729/glslang/
cd ncnn-20220729

# remove int8 tests
sed -i -e "/benchmark(\".*_int8\"/d" benchmark/benchncnn.cpp

mkdir build
cd build
cmake -DNCNN_VULKAN=ON -DNCNN_BUILD_TOOLS=OFF -DNCNN_BUILD_EXAMPLES=OFF ..
is_cmake_ok=$?
if [ $is_cmake_ok -ne 0 ]; then
	# try to build cpu-only test on system without vulkan development files
	cmake -DNCNN_VULKAN=OFF -DNCNN_BUILD_TOOLS=OFF -DNCNN_BUILD_EXAMPLES=OFF ..
fi
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cp ../benchmark/*.param benchmark/

cd ~/
cat>ncnn<<EOT
#!/bin/sh
cd ncnn-20220729/build/benchmark
./benchncnn 250 \$NUM_CPU_CORES 0 \$@ 0 > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status
EOT
chmod +x ncnn
results-definition.xml
<?xml version="1.0"?> <!--Phoronix Test Suite v10.8.4--> <PhoronixTestSuite> <ResultsParser> <OutputTemplate> mobilenet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> mobilenet </LineHint> <AppendToArgumentsDescription>Model: mobilenet</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> mobilenet_v2 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> mobilenet_v2 </LineHint> <AppendToArgumentsDescription>Model: mobilenet-v2</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> mobilenet_v3 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> mobilenet_v3 </LineHint> <AppendToArgumentsDescription>Model: mobilenet-v3</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> shufflenet_v2 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> shufflenet_v2 </LineHint> <AppendToArgumentsDescription>Model: shufflenet-v2</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> mnasnet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> mnasnet </LineHint> <AppendToArgumentsDescription>Model: mnasnet</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> efficientnet_b0 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> efficientnet_b0 </LineHint> <AppendToArgumentsDescription>Model: efficientnet-b0</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> blazeface min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> blazeface </LineHint> <AppendToArgumentsDescription>Model: blazeface</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> googlenet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> googlenet </LineHint> 
<AppendToArgumentsDescription>Model: googlenet</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> vgg16 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> vgg16 </LineHint> <AppendToArgumentsDescription>Model: vgg16</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> resnet18 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> resnet18 </LineHint> <AppendToArgumentsDescription>Model: resnet18</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> alexnet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> alexnet </LineHint> <AppendToArgumentsDescription>Model: alexnet</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> resnet50 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> resnet50 </LineHint> <AppendToArgumentsDescription>Model: resnet50</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> mobilenetv2_yolov3 min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> mobilenetv2_yolov3 </LineHint> <AppendToArgumentsDescription>Model: mobilenetv2-yolov3</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> yolov4-tiny min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> yolov4-tiny </LineHint> <AppendToArgumentsDescription>Model: yolov4-tiny</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> squeezenet_ssd min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> squeezenet_ssd </LineHint> <AppendToArgumentsDescription>Model: squeezenet_ssd</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> regnety_400m min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> regnety_400m 
</LineHint> <AppendToArgumentsDescription>Model: regnety_400m</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> vision_transformer min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> vision_transformer </LineHint> <AppendToArgumentsDescription>Model: vision_transformer</AppendToArgumentsDescription> </ResultsParser> <ResultsParser> <OutputTemplate> FastestDet min = #_MIN_RESULT_# max = #_MAX_RESULT_# avg = #_RESULT_#</OutputTemplate> <LineHint> FastestDet </LineHint> <AppendToArgumentsDescription>Model: FastestDet</AppendToArgumentsDescription> </ResultsParser> </PhoronixTestSuite>
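Each parser above matches one per-model line of benchncnn's output, with the #_MIN_RESULT_#, #_MAX_RESULT_#, and #_RESULT_# placeholders capturing the min/max/avg timings. A minimal shell sketch of the same extraction, using a made-up sample line (the timing values are illustrative):

```shell
#!/bin/sh
# Illustrative benchncnn output line; the real log has one such line per model.
line="          mobilenet  min =    9.12  max =    9.80  avg =    9.41"
# Fields after splitting on whitespace:
#   $1=model $2=min $3== $4=<min> $5=max $6== $7=<max> $8=avg $9== $10=<avg>
min=$(printf '%s\n' "$line" | awk '{print $4}')
max=$(printf '%s\n' "$line" | awk '{print $7}')
avg=$(printf '%s\n' "$line" | awk '{print $10}')
echo "min=$min max=$max avg=$avg"
```

The avg field maps to #_RESULT_#, which is the value reported on the result graphs (in ms, lower is better).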
test-definition.xml
<?xml version="1.0"?> <!--Phoronix Test Suite v10.8.4--> <PhoronixTestSuite> <TestInformation> <Title>NCNN</Title> <AppVersion>20220729</AppVersion> <Description>NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent.</Description> <ResultScale>ms</ResultScale> <Proportion>LIB</Proportion> <TimesToRun>3</TimesToRun> </TestInformation> <TestProfile> <Version>1.4.0</Version> <SupportedPlatforms>Linux, MacOSX</SupportedPlatforms> <SoftwareType>Scientific</SoftwareType> <TestType>System</TestType> <License>Free</License> <Status>Verified</Status> <ExternalDependencies>cmake, build-utilities, vulkan-development</ExternalDependencies> <EnvironmentSize>196</EnvironmentSize> <ProjectURL>https://github.com/Tencent/ncnn/</ProjectURL> <RepositoryURL>https://github.com/Tencent/ncnn</RepositoryURL> <Maintainer>Michael Larabel</Maintainer> <SystemDependencies>glslang/Include/Common.h, glslangValidator</SystemDependencies> </TestProfile> <TestSettings> <Option> <DisplayName>Target</DisplayName> <Identifier>target</Identifier> <Menu> <Entry> <Name>CPU</Name> <Value>-1</Value> </Entry> <Entry> <Name>Vulkan GPU</Name> <Value>0</Value> </Entry> </Menu> </Option> </TestSettings> </PhoronixTestSuite>