rt ml lunar lake: Intel Core Ultra 7 256V testing with an ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 (UX5406SA.300 BIOS) and ASUS Intel LNL 7GB graphics on Ubuntu 24.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410150-NE-RTMLLUNAR57&sro&grs
rt ml lunar lake - system details (identical for runs a, b, c, d):

  Processor: Intel Core Ultra 7 256V @ 4.70GHz (8 Cores)
  Motherboard: ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 (UX5406SA.300 BIOS)
  Chipset: Intel Device a87f
  Memory: 8 x 2GB LPDDR5-8533MT/s Samsung
  Disk: 1024GB Western Digital WD PC SN560 SDDPNQE-1T00-1102
  Graphics: ASUS Intel LNL 7GB
  Audio: Intel Lunar Lake-M HD Audio
  Network: Intel Device a840
  OS: Ubuntu 24.10
  Kernel: 6.11.0-8-generic (x86_64)
  Desktop: GNOME Shell 47.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.3~git2410050600.39e301~oibaf~o (git-39e3015 2024-10-05 oracular-oibaf-pp
  OpenCL: OpenCL 3.0
  Compiler: GCC 14.2.0
  File-System: ext4
  Screen Resolution: 2880x1800

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2,rust --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-14-zdkDXv/gcc-14-14.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x114; Thermald 2.5.8

Security Details:
  gather_data_sampling: Not affected
  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  mmio_stale_data: Not affected
  reg_file_data_sampling: Not affected
  retbleed: Not affected
  spec_rstack_overflow: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected
  srbds: Not affected
  tsx_async_abort: Not affected
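The openbenchmarking.org result ID in the URL above can be fed back into the Phoronix Test Suite to re-run the same tests locally and compare against these numbers. A minimal sketch, assuming phoronix-test-suite is installed and on PATH (wrapping the CLI in Python is purely illustrative):

  import subprocess

  # Result ID taken from the openbenchmarking.org URL above.
  RESULT_ID = "2410150-NE-RTMLLUNAR57"

  # "phoronix-test-suite benchmark <result-id>" fetches the referenced test
  # profiles and runs them locally for a side-by-side comparison; expect
  # interactive prompts unless batch mode has been configured.
  subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)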
rt ml lunar lake - results overview (all values are time; lower is better):

  Test                                                      a          b          c          d
  onednn: IP Shapes 1D - CPU (ms)                           28.6413    31.1777    27.7949    134.526
  onednn: Deconvolution Batch shapes_3d - CPU (ms)          81.3826    37.519     45.0519    74.9494
  onednn: Recurrent Neural Network Inference - CPU (ms)     38654.1    20543.4    24385.1    22618.7
  onednn: Recurrent Neural Network Training - CPU (ms)      47469.3    40646.2    46000.7    59465.3
  onednn: Convolution Batch Shapes Auto - CPU (ms)          13.0634    11.1984    11.6274    11.0063
  xnnpack: FP32MobileNetV3Large (us)                        2710       2623       2657       2565
  xnnpack: FP32MobileNetV1 (us)                             4568       4669       4522       4547
  xnnpack: FP32MobileNetV2 (us)                             2756       2701       2747       2763
  xnnpack: FP32MobileNetV3Small (us)                        953        946        944        963
  litert: Mobilenet Float (us)                              3368.65    3382.83    3418.29    3353.43
  litert: Quantized COCO SSD MobileNet v1 (us)              5751.87    5780.23    5767.25    5675.1
  xnnpack: FP16MobileNetV1 (us)                             3944       3996       3981       3928
  xnnpack: FP16MobileNetV2 (us)                             2514       2554       2552       2541
  onednn: Deconvolution Batch shapes_1d - CPU (ms)          7.56465    7.47775    7.50631    7.46605
  xnnpack: QS8MobileNetV2 (us)                              1431       1435       1417       1418
  xnnpack: FP16MobileNetV3Small (us)                        891        899        892        888
  litert: Mobilenet Quant (us)                              4311.17    4304.33    4354.91    4327.05
  litert: DeepLab V3 (us)                                   3692.05    3732.92    3723.37    3730.47
  litert: NASNet Mobile (us)                                8215.58    8230.73    8157.02    8232.79
  xnnpack: FP16MobileNetV3Large (us)                        2303       2304       2295       2285
  litert: Inception V4 (us)                                 61928.8    61615      61797.7    61788.8
  onednn: IP Shapes 3D - CPU (ms)                           7.65143    7.62038    7.61504    7.64038
  litert: SqueezeNet (us)                                   4065.69    4071.32    4082.01    4063.38
  litert: Inception ResNet V2 (us)                          56566.9    56700.7    56712.2    56752.5
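Since all four runs use the same hardware and software stack, the per-test spread across a/b/c/d is a quick sanity check on run-to-run stability; run d's 134.5 ms IP Shapes 1D result versus roughly 28-31 ms in the other runs stands out. A minimal sketch, with a few rows copied from the table above (the dictionary layout is illustrative only):

  # Values copied from the overview table above; all are lower-is-better times.
  results = {
      "onednn: IP Shapes 1D - CPU (ms)": [28.6413, 31.1777, 27.7949, 134.526],
      "onednn: Deconvolution Batch shapes_3d - CPU (ms)": [81.3826, 37.519, 45.0519, 74.9494],
      "litert: SqueezeNet (us)": [4065.69, 4071.32, 4082.01, 4063.38],
  }

  for test, runs in results.items():
      best, worst = min(runs), max(runs)
      spread = (worst - best) / best * 100  # worst-case deviation vs. best run
      print(f"{test}: best {best:g}, worst {worst:g}, spread {spread:.1f}%")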
oneDNN 3.6 - Harness: IP Shapes 1D - Engine: CPU (ms, fewer is better)
  a: 28.64 (MIN: 9.25) | b: 31.18 (MIN: 9.31) | c: 27.79 (MIN: 9.27) | d: 134.53 (MIN: 9.47)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, fewer is better)
  a: 81.38 (MIN: 9.43) | b: 37.52 (MIN: 9.23) | c: 45.05 (MIN: 9.32) | d: 74.95 (MIN: 9.26)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6 - Harness: Recurrent Neural Network Inference - Engine: CPU (ms, fewer is better)
  a: 38654.1 (MIN: 25281.8) | b: 20543.4 (MIN: 10317.1) | c: 24385.1 (MIN: 15657.9) | d: 22618.7 (MIN: 10970.2)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6 - Harness: Recurrent Neural Network Training - Engine: CPU (ms, fewer is better)
  a: 47469.3 (MIN: 30651.9) | b: 40646.2 (MIN: 18915.5) | c: 46000.7 (MIN: 23528.6) | d: 59465.3 (MIN: 31857.7)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
oneDNN 3.6 - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, fewer is better)
  a: 13.06 (MIN: 6.74) | b: 11.20 (MIN: 6.75) | c: 11.63 (MIN: 6.71) | d: 11.01 (MIN: 6.5)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
XNNPACK b7b048 - Model: FP32MobileNetV3Large (us, fewer is better)
  a: 2710 | b: 2623 | c: 2657 | d: 2565
  (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048 - Model: FP32MobileNetV1 (us, fewer is better)
  a: 4568 | b: 4669 | c: 4522 | d: 4547
  (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048 - Model: FP32MobileNetV2 (us, fewer is better)
  a: 2756 | b: 2701 | c: 2747 | d: 2763
  (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048 - Model: FP32MobileNetV3Small (us, fewer is better)
  a: 953 | b: 946 | c: 944 | d: 963
  (CXX) g++ options: -O3 -lrt -lm
LiteRT 2024-10-15 - Model: Mobilenet Float (microseconds, fewer is better)
  a: 3368.65 | b: 3382.83 | c: 3418.29 | d: 3353.43
LiteRT 2024-10-15 - Model: Quantized COCO SSD MobileNet v1 (microseconds, fewer is better)
  a: 5751.87 | b: 5780.23 | c: 5767.25 | d: 5675.10
XNNPACK b7b048 - Model: FP16MobileNetV1 (us, fewer is better)
  a: 3944 | b: 3996 | c: 3981 | d: 3928
  (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048 - Model: FP16MobileNetV2 (us, fewer is better)
  a: 2514 | b: 2554 | c: 2552 | d: 2541
  (CXX) g++ options: -O3 -lrt -lm
oneDNN 3.6 - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, fewer is better)
  a: 7.56465 (MIN: 6.9) | b: 7.47775 (MIN: 6.92) | c: 7.50631 (MIN: 6.98) | d: 7.46605 (MIN: 7.03)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
XNNPACK b7b048 - Model: QS8MobileNetV2 (us, fewer is better)
  a: 1431 | b: 1435 | c: 1417 | d: 1418
  (CXX) g++ options: -O3 -lrt -lm
XNNPACK b7b048 - Model: FP16MobileNetV3Small (us, fewer is better)
  a: 891 | b: 899 | c: 892 | d: 888
  (CXX) g++ options: -O3 -lrt -lm
LiteRT 2024-10-15 - Model: Mobilenet Quant (microseconds, fewer is better)
  a: 4311.17 | b: 4304.33 | c: 4354.91 | d: 4327.05
LiteRT 2024-10-15 - Model: DeepLab V3 (microseconds, fewer is better)
  a: 3692.05 | b: 3732.92 | c: 3723.37 | d: 3730.47
LiteRT 2024-10-15 - Model: NASNet Mobile (microseconds, fewer is better)
  a: 8215.58 | b: 8230.73 | c: 8157.02 | d: 8232.79
XNNPACK b7b048 - Model: FP16MobileNetV3Large (us, fewer is better)
  a: 2303 | b: 2304 | c: 2295 | d: 2285
  (CXX) g++ options: -O3 -lrt -lm
LiteRT 2024-10-15 - Model: Inception V4 (microseconds, fewer is better)
  a: 61928.8 | b: 61615.0 | c: 61797.7 | d: 61788.8
oneDNN 3.6 - Harness: IP Shapes 3D - Engine: CPU (ms, fewer is better)
  a: 7.65143 (MIN: 7.29) | b: 7.62038 (MIN: 7.21) | c: 7.61504 (MIN: 7.29) | d: 7.64038 (MIN: 7.42)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl
LiteRT 2024-10-15 - Model: SqueezeNet (microseconds, fewer is better)
  a: 4065.69 | b: 4071.32 | c: 4082.01 | d: 4063.38
LiteRT 2024-10-15 - Model: Inception ResNet V2 (microseconds, fewer is better)
  a: 56566.9 | b: 56700.7 | c: 56712.2 | d: 56752.5
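OpenBenchmarking result views commonly condense multi-test comparisons into a geometric mean per configuration; with every metric here a lower-is-better time, the lower geomean wins. A minimal sketch over four of the LiteRT results above (the choice of tests is illustrative only):

  import math

  # Per-configuration LiteRT times (microseconds) copied from the results
  # above: SqueezeNet, DeepLab V3, NASNet Mobile, Inception V4.
  runs = {
      "a": [4065.69, 3692.05, 8215.58, 61928.8],
      "b": [4071.32, 3732.92, 8230.73, 61615.0],
      "c": [4082.01, 3723.37, 8157.02, 61797.7],
      "d": [4063.38, 3730.47, 8232.79, 61788.8],
  }

  for cfg, times in runs.items():
      geomean = math.exp(sum(map(math.log, times)) / len(times))
      print(f"{cfg}: geometric mean {geomean:.1f} us (lower is better)")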
Phoronix Test Suite v10.8.5