tg: Tests for a future article. Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 15GB graphics on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2311250-NE-TG983149007&gru .
System Configuration (runs a and b)
  Processor: Intel Core i7-1280P @ 4.70GHz (14 Cores / 20 Threads)
  Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
  Chipset: Intel Alder Lake PCH
  Memory: 16GB
  Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
  Graphics: MSI Intel ADL GT2 15GB (1450MHz)
  Audio: Realtek ALC274
  Network: Intel Alder Lake-P PCH CNVi WiFi
  OS: Ubuntu 23.10
  Kernel: 6.5.0-10-generic (x86_64)
  Desktop: GNOME Shell 45.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 23.2.1-1ubuntu3
  OpenCL: OpenCL 3.0
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x42c; Thermald 2.5.4
Java Details: OpenJDK Runtime Environment (build 17.0.9-ea+6-Ubuntu-1)
Python Details: Python 3.11.6
Security Details:
  gather_data_sampling: Not affected
  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  mmio_stale_data: Not affected
  retbleed: Not affected
  spec_rstack_overflow: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: SW sequence
  srbds: Not affected
  tsx_async_abort: Not affected
Results Summary (runs a and b)

Test                                                           a          b
pytorch: CPU - 1 - ResNet-50 (batches/sec)                     20.04      15.47
pytorch: CPU - 1 - ResNet-152 (batches/sec)                    9.19       7.33
pytorch: CPU - 16 - ResNet-50 (batches/sec)                    13.77      10.75
pytorch: CPU - 32 - ResNet-50 (batches/sec)                    13.78      10.77
pytorch: CPU - 64 - ResNet-50 (batches/sec)                    13.76      10.75
pytorch: CPU - 16 - ResNet-152 (batches/sec)                   5.44       4.27
pytorch: CPU - 32 - ResNet-152 (batches/sec)                   5.40       3.84
pytorch: CPU - 64 - ResNet-152 (batches/sec)                   5.45       4.29
pytorch: CPU - 1 - Efficientnet_v2_l (batches/sec)             5.52       4.49
pytorch: CPU - 16 - Efficientnet_v2_l (batches/sec)            3.39       2.52
pytorch: CPU - 32 - Efficientnet_v2_l (batches/sec)            3.48       2.99
pytorch: CPU - 64 - Efficientnet_v2_l (batches/sec)            3.23       2.97
embree: Pathtracer ISPC - Crown (FPS)                          5.3366     5.3316
embree: Pathtracer ISPC - Asian Dragon (FPS)                   7.143      7.111
arrayfire: BLAS CPU FP16 (GFLOPS)                              64.539     64.2895
arrayfire: BLAS CPU FP32 (GFLOPS)                              394.686    106.235
java-scimark2: Composite (Mflops)                              2908.79    2916.23
java-scimark2: Monte Carlo (Mflops)                            1245.28    1246.36
java-scimark2: Fast Fourier Transform (Mflops)                 719.93     725.12
java-scimark2: Sparse Matrix Multiply (Mflops)                 3708.47    3733.82
java-scimark2: Dense LU Matrix Factorization (Mflops)          6529.95    6531.95
java-scimark2: Jacobi Successive Over-Relaxation (Mflops)      2340.35    2343.93
webp2: Default (MP/s)                                          7.05       7.04
webp2: Quality 75, Compression Effort 7 (MP/s)                 0.09       0.09
webp2: Quality 95, Compression Effort 7 (MP/s)                 0.04       0.04
webp2: Quality 100, Compression Effort 5 (MP/s)                3.60       3.15
webp2: Quality 100, Lossless Compression (MP/s)                0.01       0.01
arrayfire: Conjugate Gradient CPU (ms, fewer is better)        13.42      13.12
blender: BMW27 - CPU-Only (Seconds, fewer is better)           233.56     308.05
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (batches/sec, More Is Better)
  a: 20.04   (MIN: 17.17 / MAX: 35.57)
  b: 15.47   (MIN: 14.08 / MAX: 22.48)
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152 (batches/sec, More Is Better)
  a: 9.19   (MIN: 8.48 / MAX: 12.97)
  b: 7.33   (MIN: 7.08 / MAX: 11.65)
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (batches/sec, More Is Better)
  a: 13.77   (MIN: 12.94 / MAX: 18.53)
  b: 10.75   (MIN: 10.17 / MAX: 15.41)
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (batches/sec, More Is Better)
  a: 13.78   (MIN: 13.15 / MAX: 18.24)
  b: 10.77   (MIN: 10.43 / MAX: 14.76)
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (batches/sec, More Is Better)
  a: 13.76   (MIN: 13.09 / MAX: 18.12)
  b: 10.75   (MIN: 10.57 / MAX: 15.02)
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152 (batches/sec, More Is Better)
  a: 5.44   (MIN: 5.3 / MAX: 7.11)
  b: 4.27   (MIN: 4.17 / MAX: 5.87)
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152 (batches/sec, More Is Better)
  a: 5.40   (MIN: 5.26 / MAX: 7.05)
  b: 3.84   (MIN: 3.77 / MAX: 5.24)
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: ResNet-152 (batches/sec, More Is Better)
  a: 5.45   (MIN: 5.17 / MAX: 7.1)
  b: 4.29   (MIN: 4.22 / MAX: 5.81)
PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  a: 5.52   (MIN: 5.16 / MAX: 8.36)
  b: 4.49   (MIN: 4.05 / MAX: 6.6)
PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  a: 3.39   (MIN: 3.16 / MAX: 5.04)
  b: 2.52   (MIN: 2.46 / MAX: 3.36)
PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  a: 3.48   (MIN: 3.2 / MAX: 4.03)
  b: 2.99   (MIN: 2.89 / MAX: 4.23)
PyTorch 2.1 - Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l (batches/sec, More Is Better)
  a: 3.23   (MIN: 3.15 / MAX: 4.09)
  b: 2.97   (MIN: 2.92 / MAX: 4.19)
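For a rough sanity check of throughput figures like the PyTorch numbers above outside the Phoronix Test Suite, the following is a minimal sketch of a CPU inference measurement in batches/sec. It is not the pts/pytorch test profile: the warm-up count, iteration count, and thread setting are assumptions made for illustration, and it assumes torch plus a recent torchvision (0.13 or newer) are installed.

import time
import torch
import torchvision.models as models

# Assumption: use the 20 hardware threads reported in the system table above.
torch.set_num_threads(20)

model = models.resnet50(weights=None).eval()  # untrained weights are fine for a timing run
batch = torch.randn(1, 3, 224, 224)           # batch size 1, matching the first PyTorch result

with torch.no_grad():
    for _ in range(5):                        # warm-up iterations, not timed
        model(batch)
    iters = 50
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{iters / elapsed:.2f} batches/sec")

Repeating the timed loop several times and keeping the best and worst per-loop rates gives a rough analogue of the MIN/MAX figures reported above.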
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  a: 5.3366   (MIN: 5.2 / MAX: 5.49)
  b: 5.3316   (MIN: 5.2 / MAX: 5.49)
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  a: 7.143   (MIN: 7.04 / MAX: 7.25)
  b: 7.111   (MIN: 6.99 / MAX: 7.23)
ArrayFire 3.9 - Test: BLAS CPU FP16 (GFLOPS, More Is Better)
  a: 64.54
  b: 64.29
  1. (CXX) g++ options: -O3
ArrayFire 3.9 - Test: BLAS CPU FP32 (GFLOPS, More Is Better)
  a: 394.69
  b: 106.24
  1. (CXX) g++ options: -O3
Java SciMark 2.2 - Computational Test: Composite (Mflops, More Is Better)
  a: 2908.79
  b: 2916.23
Java SciMark 2.2 - Computational Test: Monte Carlo (Mflops, More Is Better)
  a: 1245.28
  b: 1246.36
Java SciMark 2.2 - Computational Test: Fast Fourier Transform (Mflops, More Is Better)
  a: 719.93
  b: 725.12
Java SciMark 2.2 - Computational Test: Sparse Matrix Multiply (Mflops, More Is Better)
  a: 3708.47
  b: 3733.82
Java SciMark 2.2 - Computational Test: Dense LU Matrix Factorization (Mflops, More Is Better)
  a: 6529.95
  b: 6531.95
Java SciMark 2.2 - Computational Test: Jacobi Successive Over-Relaxation (Mflops, More Is Better)
  a: 2340.35
  b: 2343.93
WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s, More Is Better)
  a: 7.05
  b: 7.04
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better)
  a: 0.09
  b: 0.09
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better)
  a: 0.04
  b: 0.04
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, More Is Better)
  a: 3.60
  b: 3.15
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression (MP/s, More Is Better)
  a: 0.01
  b: 0.01
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
ArrayFire 3.9 - Test: Conjugate Gradient CPU (ms, Fewer Is Better)
  a: 13.42
  b: 13.12
  1. (CXX) g++ options: -O3
Blender 4.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  a: 233.56
  b: 308.05
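To cross-check the Blender result without the test suite, one can time a headless render of the same scene from Python. This is a minimal sketch, assuming Blender is on the PATH and a local copy of the BMW27 demo scene exists; the file name below is hypothetical, not something taken from this result page.

import subprocess
import time

# Assumption: bmw27_cpu.blend is a local copy of the BMW27 demo scene (hypothetical path).
start = time.perf_counter()
subprocess.run(["blender", "-b", "bmw27_cpu.blend", "-f", "1"], check=True)  # background render of frame 1
print(f"{time.perf_counter() - start:.2f} Seconds")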
Phoronix Test Suite v10.8.5