dsdfds
Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1662 BIOS) and ASUS Intel RPL-S 16GB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410191-PTS-DSDFDS2287&sor&grs.
LiteRT - Model: NASNet Mobile
LiteRT - Model: DeepLab V3
XNNPACK - Model: FP32MobileNetV1
XNNPACK - Model: FP16MobileNetV1
Mobile Neural Network - Model: squeezenetv1.1
LiteRT - Model: Mobilenet Quant
Mobile Neural Network - Model: mobilenetV3
LiteRT - Model: Inception ResNet V2
Mobile Neural Network - Model: MobileNetV2_224
XNNPACK - Model: FP16MobileNetV3Large
NAMD - Input: ATPase with 327,506 Atoms
LiteRT - Model: Quantized COCO SSD MobileNet v1
Apache Cassandra - Test: Writes
BYTE Unix Benchmark - Computational Test: Whetstone Double
oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU
XNNPACK - Model: FP32MobileNetV3Large
XNNPACK - Model: FP32MobileNetV2
XNNPACK - Model: FP32MobileNetV3Small
XNNPACK - Model: FP16MobileNetV3Small
Mobile Neural Network - Model: resnet-v2-50
LiteRT - Model: Inception V4
XNNPACK - Model: QS8MobileNetV2
Build2 - Time To Compile
BYTE Unix Benchmark - Computational Test: Dhrystone 2
Mobile Neural Network - Model: SqueezeNetV1.0
LiteRT - Model: SqueezeNet
Mobile Neural Network - Model: nasnet
LiteRT - Model: Mobilenet Float
NAMD - Input: STMV with 1,066,628 Atoms
BYTE Unix Benchmark - Computational Test: Pipe
oneDNN - Harness: IP Shapes 1D - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU
Mobile Neural Network - Model: inception-v3
oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU
BYTE Unix Benchmark - Computational Test: System Call
oneDNN - Harness: IP Shapes 3D - Engine: CPU
XNNPACK - Model: FP16MobileNetV2
oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU
Mobile Neural Network - Model: mobilenet-v1-1.0
oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU
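
To compare another system against this result set, the same tests can be re-run locally through the Phoronix Test Suite's result-comparison mode. Below is a minimal sketch, assuming phoronix-test-suite is installed and on PATH; it simply shells out to the documented `phoronix-test-suite benchmark <result-id>` command, using the result ID from the URL above.

import shutil
import subprocess
import sys

# Result ID taken from the openbenchmarking.org URL above.
RESULT_ID = "2410191-PTS-DSDFDS2287"

def main() -> None:
    pts = shutil.which("phoronix-test-suite")
    if pts is None:
        sys.exit("phoronix-test-suite not found on PATH")
    # 'benchmark <result-id>' fetches the saved result and runs the same
    # tests locally so the new numbers can be merged into a comparison.
    subprocess.run([pts, "benchmark", RESULT_ID], check=True)

if __name__ == "__main__":
    main()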
Phoronix Test Suite v10.8.5