litert-onednn-xnnpack
2 x AMD EPYC 9575F 64-Core testing with an AMD VOLCANO (RVOT1000D BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410169-NE-LITERTONE51&grr&rdt.
LiteRT - Model: Inception V4
LiteRT - Model: Mobilenet Float
LiteRT - Model: DeepLab V3
LiteRT - Model: NASNet Mobile
LiteRT - Model: Mobilenet Quant
LiteRT - Model: Inception ResNet V2
LiteRT - Model: SqueezeNet
XNNPACK - Model: QS8MobileNetV2
XNNPACK - Model: FP16MobileNetV3Small
XNNPACK - Model: FP16MobileNetV3Large
XNNPACK - Model: FP16MobileNetV2
XNNPACK - Model: FP16MobileNetV1
XNNPACK - Model: FP32MobileNetV3Small
XNNPACK - Model: FP32MobileNetV3Large
XNNPACK - Model: FP32MobileNetV2
XNNPACK - Model: FP32MobileNetV1
oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU
LiteRT - Model: Quantized COCO SSD MobileNet v1
oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN - Harness: IP Shapes 1D - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU
Phoronix Test Suite v10.8.5