rt ml lunar lalke: Intel Core Ultra 7 256V testing with an ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 (UX5406SA.300 BIOS) and ASUS Intel LNL 7GB graphics on Ubuntu 24.10, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410150-NE-RTMLLUNAR57&gru&sor.
System details (identical across all four runs a, b, c, and d):

  Processor:         Intel Core Ultra 7 256V @ 4.70GHz (8 Cores)
  Motherboard:       ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 (UX5406SA.300 BIOS)
  Chipset:           Intel Device a87f
  Memory:            8 x 2GB LPDDR5-8533MT/s Samsung
  Disk:              1024GB Western Digital WD PC SN560 SDDPNQE-1T00-1102
  Graphics:          ASUS Intel LNL 7GB
  Audio:             Intel Lunar Lake-M HD Audio
  Network:           Intel Device a840
  OS:                Ubuntu 24.10
  Kernel:            6.11.0-8-generic (x86_64)
  Desktop:           GNOME Shell 47.0
  Display Server:    X Server + Wayland
  OpenGL:            4.6 Mesa 24.3~git2410050600.39e301~oibaf~o (git-39e3015 2024-10-05 oracular-oibaf-pp)
  OpenCL:            OpenCL 3.0
  Compiler:          GCC 14.2.0
  File-System:       ext4
  Screen Resolution: 2880x1800

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x114; Thermald 2.5.8

Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: Not affected, BHI: Not affected; srbds: Not affected; tsx_async_abort: Not affected
Result overview (all tests: fewer is better):

  Test                                                       a          b          c          d
  litert: DeepLab V3 (us)                                3692.05    3732.92    3723.37    3730.47
  litert: SqueezeNet (us)                                4065.69    4071.32    4082.01    4063.38
  litert: Inception V4 (us)                              61928.8    61615.0    61797.7    61788.8
  litert: NASNet Mobile (us)                             8215.58    8230.73    8157.02    8232.79
  litert: Mobilenet Float (us)                           3368.65    3382.83    3418.29    3353.43
  litert: Mobilenet Quant (us)                           4311.17    4304.33    4354.91    4327.05
  litert: Inception ResNet V2 (us)                       56566.9    56700.7    56712.2    56752.5
  litert: Quantized COCO SSD MobileNet v1 (us)           5751.87    5780.23    5767.25    5675.10
  onednn: IP Shapes 1D - CPU (ms)                        28.6413    31.1777    27.7949    134.526
  onednn: IP Shapes 3D - CPU (ms)                        7.65143    7.62038    7.61504    7.64038
  onednn: Convolution Batch Shapes Auto - CPU (ms)       13.0634    11.1984    11.6274    11.0063
  onednn: Deconvolution Batch shapes_1d - CPU (ms)       7.56465    7.47775    7.50631    7.46605
  onednn: Deconvolution Batch shapes_3d - CPU (ms)       81.3826    37.5190    45.0519    74.9494
  onednn: Recurrent Neural Network Training - CPU (ms)   47469.3    40646.2    46000.7    59465.3
  onednn: Recurrent Neural Network Inference - CPU (ms)  38654.1    20543.4    24385.1    22618.7
  xnnpack: FP32MobileNetV1 (us)                          4568       4669       4522       4547
  xnnpack: FP32MobileNetV2 (us)                          2756       2701       2747       2763
  xnnpack: FP32MobileNetV3Large (us)                     2710       2623       2657       2565
  xnnpack: FP32MobileNetV3Small (us)                     953        946        944        963
  xnnpack: FP16MobileNetV1 (us)                          3944       3996       3981       3928
  xnnpack: FP16MobileNetV2 (us)                          2514       2554       2552       2541
  xnnpack: FP16MobileNetV3Large (us)                     2303       2304       2295       2285
  xnnpack: FP16MobileNetV3Small (us)                     891        899        892        888
  xnnpack: QS8MobileNetV2 (us)                           1431       1435       1417       1418
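Since the four runs are otherwise identically configured, the spread across a-d indicates run-to-run variance rather than a configuration difference. A minimal sketch (values transcribed from the overview table above; the max/min-ratio metric is our own choice, not something OpenBenchmarking reports) that flags the least repeatable tests:

```python
# Run-to-run spread across the four otherwise-identical runs (a, b, c, d).
# Values transcribed from the overview table above; a small representative subset.
results = {
    "onednn: IP Shapes 1D - CPU (ms)": [28.6413, 31.1777, 27.7949, 134.526],
    "onednn: Deconvolution Batch shapes_3d - CPU (ms)": [81.3826, 37.519, 45.0519, 74.9494],
    "onednn: Recurrent Neural Network Inference - CPU (ms)": [38654.1, 20543.4, 24385.1, 22618.7],
    "litert: DeepLab V3 (us)": [3692.05, 3732.92, 3723.37, 3730.47],
    "xnnpack: QS8MobileNetV2 (us)": [1431, 1435, 1417, 1418],
}

def spread(vals):
    """Max/min ratio across runs: 1.0 means perfectly repeatable."""
    return max(vals) / min(vals)

# Print tests from least to most repeatable.
for name, vals in sorted(results.items(), key=lambda kv: -spread(kv[1])):
    print(f"{spread(vals):6.2f}x  {name}")
```

By this measure the oneDNN IP Shapes 1D result spreads nearly 5x (run d is the outlier) and RNN inference almost 2x, while the LiteRT and XNNPACK results stay within roughly 1-2%.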
LiteRT 2024-10-15 (Microseconds, Fewer Is Better), runs sorted fastest to slowest:

  Model: DeepLab V3                        a: 3692.05   c: 3723.37   d: 3730.47   b: 3732.92
  Model: SqueezeNet                        d: 4063.38   a: 4065.69   b: 4071.32   c: 4082.01
  Model: Inception V4                      b: 61615.0   d: 61788.8   c: 61797.7   a: 61928.8
  Model: NASNet Mobile                     c: 8157.02   a: 8215.58   b: 8230.73   d: 8232.79
  Model: Mobilenet Float                   d: 3353.43   a: 3368.65   b: 3382.83   c: 3418.29
  Model: Mobilenet Quant                   b: 4304.33   a: 4311.17   d: 4327.05   c: 4354.91
  Model: Inception ResNet V2               a: 56566.9   b: 56700.7   c: 56712.2   d: 56752.5
  Model: Quantized COCO SSD MobileNet v1   d: 5675.10   a: 5751.87   c: 5767.25   b: 5780.23
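Assuming each LiteRT figure is the time for one inference in microseconds, the reciprocal gives an inferences-per-second view of the same data. A small sketch using run a's values from the LiteRT results above:

```python
# Convert LiteRT per-inference latency (microseconds) into inferences/second,
# assuming each reported figure is the time for a single inference.
# Latencies are run a's values from the LiteRT results above.
latency_us = {
    "DeepLab V3": 3692.05,
    "SqueezeNet": 4065.69,
    "Inception V4": 61928.8,
    "Mobilenet Float": 3368.65,
}

def throughput(us):
    """Inferences per second for a per-inference latency given in microseconds."""
    return 1e6 / us

for model, us in latency_us.items():
    print(f"{model:16s} {throughput(us):8.1f} inf/s")
```

On run a this works out to roughly 271 inf/s for DeepLab V3 but only about 16 inf/s for the much larger Inception V4.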
oneDNN 3.6 (ms, Fewer Is Better), runs sorted fastest to slowest, with per-run minimum; compiled with g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

  Harness: IP Shapes 1D - Engine: CPU                         c: 27.79 (MIN: 9.27)       a: 28.64 (MIN: 9.25)       b: 31.18 (MIN: 9.31)       d: 134.53 (MIN: 9.47)
  Harness: IP Shapes 3D - Engine: CPU                         c: 7.61504 (MIN: 7.29)     b: 7.62038 (MIN: 7.21)     d: 7.64038 (MIN: 7.42)     a: 7.65143 (MIN: 7.29)
  Harness: Convolution Batch Shapes Auto - Engine: CPU        d: 11.01 (MIN: 6.5)        b: 11.20 (MIN: 6.75)       c: 11.63 (MIN: 6.71)       a: 13.06 (MIN: 6.74)
  Harness: Deconvolution Batch shapes_1d - Engine: CPU        d: 7.46605 (MIN: 7.03)     b: 7.47775 (MIN: 6.92)     c: 7.50631 (MIN: 6.98)     a: 7.56465 (MIN: 6.9)
  Harness: Deconvolution Batch shapes_3d - Engine: CPU        b: 37.52 (MIN: 9.23)       c: 45.05 (MIN: 9.32)       d: 74.95 (MIN: 9.26)       a: 81.38 (MIN: 9.43)
  Harness: Recurrent Neural Network Training - Engine: CPU    b: 40646.2 (MIN: 18915.5)  c: 46000.7 (MIN: 23528.6)  a: 47469.3 (MIN: 30651.9)  d: 59465.3 (MIN: 31857.7)
  Harness: Recurrent Neural Network Inference - Engine: CPU   b: 20543.4 (MIN: 10317.1)  d: 22618.7 (MIN: 10970.2)  c: 24385.1 (MIN: 15657.9)  a: 38654.1 (MIN: 25281.8)
XNNPACK b7b048 (us, Fewer Is Better), runs sorted fastest to slowest; compiled with g++ options: -O3 -lrt -lm

  Model: FP32MobileNetV1        c: 4522   d: 4547   a: 4568   b: 4669
  Model: FP32MobileNetV2        b: 2701   c: 2747   a: 2756   d: 2763
  Model: FP32MobileNetV3Large   d: 2565   b: 2623   c: 2657   a: 2710
  Model: FP32MobileNetV3Small   c: 944    b: 946    a: 953    d: 963
  Model: FP16MobileNetV1        d: 3928   a: 3944   c: 3981   b: 3996
  Model: FP16MobileNetV2        a: 2514   d: 2541   c: 2552   b: 2554
  Model: FP16MobileNetV3Large   d: 2285   c: 2295   a: 2303   b: 2304
  Model: FP16MobileNetV3Small   d: 888    a: 891    c: 892    b: 899
  Model: QS8MobileNetV2         c: 1417   d: 1418   a: 1431   b: 1435
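The XNNPACK suite runs the same MobileNet graphs at FP32 and FP16, so the two precisions can be compared model by model. A sketch using the mean of runs a-d transcribed from the results above (averaging across runs is our own choice for this comparison):

```python
# FP16 vs FP32 XNNPACK MobileNet times (us); each list holds runs a, b, c, d
# from the results above. speedup > 1 means FP16 is faster than FP32.
fp32 = {
    "MobileNetV1":      [4568, 4669, 4522, 4547],
    "MobileNetV2":      [2756, 2701, 2747, 2763],
    "MobileNetV3Large": [2710, 2623, 2657, 2565],
    "MobileNetV3Small": [953, 946, 944, 963],
}
fp16 = {
    "MobileNetV1":      [3944, 3996, 3981, 3928],
    "MobileNetV2":      [2514, 2554, 2552, 2541],
    "MobileNetV3Large": [2303, 2304, 2295, 2285],
    "MobileNetV3Small": [891, 899, 892, 888],
}

def mean(xs):
    return sum(xs) / len(xs)

for model in fp32:
    speedup = mean(fp32[model]) / mean(fp16[model])
    print(f"{model:16s} FP32 {mean(fp32[model]):7.1f} us   "
          f"FP16 {mean(fp16[model]):7.1f} us   {speedup:.2f}x")
```

On these numbers FP16 helps most for MobileNetV1 (roughly 1.15x) and least for MobileNetV3Small (roughly 1.07x).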
Phoronix Test Suite v10.8.5