XNNPACK is Google's library of highly efficient floating-point neural network inference operators for mobile, server, and web use. XNNPACK is used by machine learning frameworks such as TensorFlow, PyTorch, ONNX Runtime, MediaPipe, and others. This test profile uses XNNPACK's bench-models benchmark and runs it with all available CPU threads.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark xnnpack.
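A typical sequence (a sketch, assuming a working Phoronix Test Suite installation; the pts/xnnpack identifier matches the revision history below) might be:

phoronix-test-suite install pts/xnnpack
phoronix-test-suite benchmark pts/xnnpack

The benchmark command will prompt for any available test options (such as the model selection shown in the results below) before running XNNPACK's bench-models with all available CPU threads.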
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Data updated weekly as of 21 December 2024.
Revision History
pts/xnnpack-1.1.0 (Tue, 15 Oct 2024 15:30:38 GMT): Update against XNNPACK upstream, switch to new benchmark.
OpenBenchmarking.org metrics for this test profile configuration are based on 194 public results since 15 October 2024, with the latest data as of 22 December 2024.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely, so this overview is intended to offer only general guidance on performance expectations.
Based on OpenBenchmarking.org data, the selected test / test configuration (XNNPACK b7b048 - Model: QS8MobileNetV2) has an average run-time of 17 minutes. By default this test profile is set to run at least 3 times, but the number of runs may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
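If a fixed sample size is preferred, the default run-count behavior can be overridden; a minimal sketch, assuming the FORCE_TIMES_TO_RUN environment variable is supported by the installed Phoronix Test Suite release:

FORCE_TIMES_TO_RUN=5 phoronix-test-suite benchmark xnnpack

This requests five runs regardless of the measured variance, which can be useful when comparing results across systems with a consistent number of samples.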
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.7%.
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.