Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated configuration. MNN can make use of AVX-512 extensions where available.


Mobile Neural Network 3.0

Model: MobileNetV2_224

OpenBenchmarking.org metrics for this test profile configuration, based on 36 public results since 18 November 2024, with the latest data as of 19 November 2024.

Below is an overview of generalized performance for components where there is sufficient, statistically significant data from user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely, so this overview is intended only as general guidance on performance expectations.
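As a rough sketch of how the per-configuration averages below could be formed, assuming each table entry's +/- figure is the standard error of the mean across that configuration's runs (the run timings here are made up for illustration, not taken from any uploaded result):

```python
from statistics import mean, stdev

def summarize(times_ms):
    """Return (average, standard error of the mean) for per-run timings."""
    avg = mean(times_ms)
    se = stdev(times_ms) / len(times_ms) ** 0.5
    return avg, se

# Hypothetical per-run MobileNetV2_224 inference timings in ms
runs = [1.008, 1.015, 1.011, 1.018]
avg, se = summarize(runs)
print(f"{avg:.3f} +/- {se:.3f} ms")  # prints "1.013 +/- 0.002 ms"
```

The standard error shrinks with the square root of the run count, which is why configurations with more compatible public results tend to show tighter +/- bounds.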

Component / Details                   Percentile Rank   # Compatible Public Results   ms (Average; lower is better)
                                      98th              4                             1.013 +/- 0.005
Zen 5 [8 Cores / 16 Threads]          84th              5                             1.112 +/- 0.017
Mid-Tier                              75th                                            > 1.903
                                      70th              3                             1.919 +/- 0.017
Zen 5 [96 Cores / 192 Threads]        62nd              3                             2.880 +/- 0.027
Zen 4 [8 Cores / 16 Threads]          53rd              3                             3.422 +/- 0.072
Median                                50th                                            3.468
Zen 4 [64 Cores / 128 Threads]        48th              3                             3.470 +/- 0.050
Lunar Lake [8 Cores / 8 Threads]      42nd              4                             3.500 +/- 0.022
Low-Tier                              25th                                            > 4.115
Zen 5 [10 Cores / 20 Threads]         23rd              4                             4.116 +/- 0.049
Meteor Lake [16 Cores / 22 Threads]   17th              3                             4.564 +/- 0.127
Zen 5 [12 Cores / 24 Threads]         3rd               4                             4.998 +/- 0.123
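To put an individual result in context, one way is to divide the 50th-percentile median time by a configuration's average time, since lower ms is better here. A minimal sketch (the MEDIAN_MS constant and speedup_vs_median helper are illustrative only, not part of any MNN or OpenBenchmarking API):

```python
# 50th-percentile median from the table above, in ms
MEDIAN_MS = 3.468

def speedup_vs_median(avg_ms):
    """How many times faster (>1) or slower (<1) a result is than the median."""
    return MEDIAN_MS / avg_ms

print(f"{speedup_vs_median(1.013):.2f}x")  # 98th-percentile entry -> 3.42x
print(f"{speedup_vs_median(4.998):.2f}x")  # 3rd-percentile entry  -> 0.69x
```

So the fastest listed configuration completes MobileNetV2_224 inference roughly 3.4 times faster than the median, while the slowest runs at about 0.7 times the median's speed.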