OpenBenchmarking.org
XNNPACK 1.0.0
pts/xnnpack-1.0.0
- 11 August 2024 -
Add XNNPACK benchmark.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.5-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/google/XNNPACK/archive/2cd86b37c0be1a433a1fbc9433eef2c826e250dd.zip</URL>
      <MD5>e3f0bf71ef8a9b3807a6fa87a17353f4</MD5>
      <SHA256>3fcf3975875e340c8675f3fa9e98ddc1a85eadbb65b3b4547f5e51aaec389210</SHA256>
      <FileName>XNNPACK-2cd86b37c0be1a433a1fbc9433eef2c826e250dd.zip</FileName>
      <FileSize>27554093</FileSize>
    </Package>
  </Downloads>
</PhoronixTestSuite>
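The Phoronix Test Suite validates each downloaded package against the MD5/SHA256 checksums declared in downloads.xml before installing it. As an illustration only (this is not PTS's actual implementation, and `verify_package` is a hypothetical helper), the SHA-256 check amounts to:

```python
import hashlib

def verify_package(path, expected_sha256):
    """Compare a file's SHA-256 digest against the expected hex value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large archives need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Usage against the ~27 MB archive above (path shortened for illustration):
# verify_package("XNNPACK-2cd86b....zip",
#                "3fcf3975875e340c8675f3fa9e98ddc1a85eadbb65b3b4547f5e51aaec389210")
```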
install.sh
#!/bin/bash
rm -rf XNNPACK-2cd86b37c0be1a433a1fbc9433eef2c826e250dd
unzip -o XNNPACK-2cd86b37c0be1a433a1fbc9433eef2c826e250dd.zip
cd XNNPACK-2cd86b37c0be1a433a1fbc9433eef2c826e250dd
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j $NUM_CPU_CORES
echo $? > ~/install-exit-status
cd ~/
cat>xnnpack<<EOT
#!/bin/sh
cd XNNPACK-2cd86b37c0be1a433a1fbc9433eef2c826e250dd/build
./end2end-bench --benchmark_min_time=8 --benchmark_min_warmup_time=2 --benchmark_filter="\$NUM_CPU_CORES/real_time" > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status
EOT
chmod +x xnnpack
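The generated `xnnpack` wrapper passes `--benchmark_filter="$NUM_CPU_CORES/real_time"` to Google Benchmark, which treats the value as a regex and runs only benchmarks whose names match it. Since end2end-bench names its variants with a `T:<threads>` component, the filter selects the variants running at the machine's core count. A sketch of that selection (the benchmark names below are illustrative, not captured output):

```python
import re

# Hypothetical end2end-bench benchmark names: model / T:<thread count> / timing mode.
names = [
    "FP32MobileNetV2/T:1/real_time",
    "FP32MobileNetV2/T:32/real_time",
    "QU8MobileNetV3Small/T:16/real_time",
    "QU8MobileNetV3Small/T:32/real_time",
]

num_cpu_cores = 32  # stand-in for the $NUM_CPU_CORES environment variable
pattern = re.compile(f"{num_cpu_cores}/real_time")  # same regex the wrapper builds
selected = [n for n in names if pattern.search(n)]
# Only the 32-thread real_time variants remain in `selected`.
```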
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.5-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>FP32MobileNetV2/T:32/real_time #_RESULT_# us 1422 us 7966 cpufreq=5.46597G</OutputTemplate>
    <LineHint>FP32MobileNetV2</LineHint>
    <ArgumentsDescription>Model: FP32MobileNetV2</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>FP32MobileNetV3Large/T:32/real_time #_RESULT_# us 1694 us 6390 cpufreq=5.48528G</OutputTemplate>
    <LineHint>FP32MobileNetV3Large</LineHint>
    <ArgumentsDescription>Model: FP32MobileNetV3Large</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>FP32MobileNetV3Small/T:32/real_time #_RESULT_# us 867 us 12836 cpufreq=5.52173G</OutputTemplate>
    <LineHint>FP32MobileNetV3Small</LineHint>
    <ArgumentsDescription>Model: FP32MobileNetV3Small</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>FP16MobileNetV2/T:32/real_time #_RESULT_# us 1422 us 7966 cpufreq=5.46597G</OutputTemplate>
    <LineHint>FP16MobileNetV2</LineHint>
    <ArgumentsDescription>Model: FP16MobileNetV2</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>FP16MobileNetV3Large/T:32/real_time #_RESULT_# us 1694 us 6390 cpufreq=5.48528G</OutputTemplate>
    <LineHint>FP16MobileNetV3Large</LineHint>
    <ArgumentsDescription>Model: FP16MobileNetV3Large</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>FP16MobileNetV3Small/T:32/real_time #_RESULT_# us 867 us 12836 cpufreq=5.52173G</OutputTemplate>
    <LineHint>FP16MobileNetV3Small</LineHint>
    <ArgumentsDescription>Model: FP16MobileNetV3Small</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>QU8MobileNetV2/T:32/real_time #_RESULT_# us 1422 us 7966 cpufreq=5.46597G</OutputTemplate>
    <LineHint>QU8MobileNetV2</LineHint>
    <ArgumentsDescription>Model: QU8MobileNetV2</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>QU8MobileNetV3Large/T:32/real_time #_RESULT_# us 1694 us 6390 cpufreq=5.48528G</OutputTemplate>
    <LineHint>QU8MobileNetV3Large</LineHint>
    <ArgumentsDescription>Model: QU8MobileNetV3Large</ArgumentsDescription>
  </ResultsParser>
  <ResultsParser>
    <OutputTemplate>QU8MobileNetV3Small/T:32/real_time #_RESULT_# us 867 us 12836 cpufreq=5.52173G</OutputTemplate>
    <LineHint>QU8MobileNetV3Small</LineHint>
    <ArgumentsDescription>Model: QU8MobileNetV3Small</ArgumentsDescription>
  </ResultsParser>
</PhoronixTestSuite>
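Each `OutputTemplate` above is a sample line of end2end-bench output with the value to extract replaced by the `#_RESULT_#` placeholder: PTS scans the log for lines containing the `LineHint` and pulls the number at the placeholder's position (here, the real time in microseconds). A rough sketch of that extraction (the sample line and its 1290 value are made up for illustration):

```python
import re

# A line in the shape the OutputTemplate describes:
# name, real time, CPU time, iteration count, perf counters.
line = "FP32MobileNetV2/T:32/real_time 1290 us 1422 us 7966 cpufreq=5.46597G"

# The #_RESULT_# placeholder sits where the first time value appears.
m = re.match(
    r"(?P<model>\w+)/T:(?P<threads>\d+)/real_time\s+(?P<result>[\d.]+) us",
    line,
)
model, result_us = m["model"], float(m["result"])
```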
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.5-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>XNNPACK</Title>
    <AppVersion>2cd86b</AppVersion>
    <Description>XNNPACK is a Google library of highly efficient floating-point neural network inference operators for mobile, server, and web use. XNNPACK is used by machine learning frameworks such as TensorFlow, PyTorch, ONNX Runtime, MediaPipe, and others. This test profile runs XNNPACK's end2end-bench benchmark using all available CPU threads.</Description>
    <ResultScale>us</ResultScale>
    <Proportion>LIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.0.0</Version>
    <SupportedPlatforms>Linux</SupportedPlatforms>
    <SoftwareType>Scientific</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <Status>Verified</Status>
    <ExternalDependencies>cmake, build-utilities</ExternalDependencies>
    <EnvironmentSize>6200</EnvironmentSize>
    <ProjectURL>https://github.com/google/XNNPACK/</ProjectURL>
    <RepositoryURL>https://github.com/google/XNNPACK</RepositoryURL>
    <Maintainer>Michael Larabel</Maintainer>
  </TestProfile>
</PhoronixTestSuite>