new tests eo nov

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and AMD Radeon 15GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2311285-PTS-NEWTESTS44&grw&rdt.

All five test runs (a, b, c, d, e) used the same system configuration:

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: AMD Radeon 15GB (1617/1124MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.5.0-10-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7 + Wayland
OpenGL: 4.6 Mesa 24.0~git2311100600.05fb6b~oibaf~m (git-05fb6b9 2023-11-10 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4
Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
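For anyone wanting to repeat this comparison, the OpenBenchmarking.org result ID in the export URL above can be fed back to the Phoronix Test Suite. A minimal sketch, assuming phoronix-test-suite is installed and on the PATH (the wrapper function is illustrative, not part of PTS):

import subprocess

def rerun_result(result_id: str) -> None:
    # "phoronix-test-suite benchmark <OpenBenchmarking.org ID>" installs the
    # same test profiles and runs them locally, offering to compare the new
    # numbers against the uploaded result; this wrapper just shells out to it.
    subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)

if __name__ == "__main__":
    # Result ID taken from the export URL at the top of this page.
    rerun_result("2311285-PTS-NEWTESTS44")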

Result overview (full per-run values for a, b, c, d, e are charted in the sections below):

java-scimark2: Composite; Monte Carlo; Fast Fourier Transform; Sparse Matrix Multiply; Dense LU Matrix Factorization; Jacobi Successive Over-Relaxation
webp2: Default; Quality 75, Compression Effort 7; Quality 95, Compression Effort 7; Quality 100, Compression Effort 5; Quality 100, Lossless Compression
pytorch: CPU inference at batch sizes 1, 16, 32, 64, 256, 512 for ResNet-50, ResNet-152, and Efficientnet_v2_l
openssl: RSA4096 (sign/s and verify/s), SHA256, SHA512
embree: Pathtracer and Pathtracer ISPC with the Crown, Asian Dragon, and Asian Dragon Obj models

Java SciMark

Computational Test: Composite

Java SciMark 2.2 - Mflops, More Is Better
a: 4716.01
b: 4785.17
c: 4773.58
d: 4772.90
e: 4779.52
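Run-to-run variation on the Composite score is small; a quick sketch of the spread using the values above (plain Python, nothing benchmark-specific):

from statistics import mean, stdev

# Composite Mflops for runs a-e, copied from the result above.
composite = {"a": 4716.01, "b": 4785.17, "c": 4773.58, "d": 4772.90, "e": 4779.52}

scores = list(composite.values())
avg = mean(scores)
rel_range = (max(scores) - min(scores)) / avg * 100  # max-min spread as % of the mean

print(f"mean:   {avg:.2f} Mflops")
print(f"stdev:  {stdev(scores):.2f} Mflops")
print(f"spread: {rel_range:.2f}% of the mean")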

Java SciMark

Computational Test: Monte Carlo

Java SciMark 2.2 - Mflops, More Is Better
a: 1567.51
b: 1567.51
c: 1556.15
d: 1567.51
e: 1568.08

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2 - Mflops, More Is Better
a: 1219.73
b: 1232.02
c: 1232.91
d: 1230.69
e: 1231.13

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2 - Mflops, More Is Better
a: 4792.04
b: 4790.64
c: 4789.24
d: 4780.86
e: 4794.85

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2 - Mflops, More Is Better
a: 13059.89
b: 13387.72
c: 13341.67
d: 13337.50
e: 13354.20

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

Java SciMark 2.2 - Mflops, More Is Better
a: 2940.87
b: 2947.94
c: 2947.94
d: 2947.94
e: 2949.36

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 15.78
b: 15.89
c: 15.40
d: 16.24
e: 15.65
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 0.35
b: 0.33
d: 0.34
e: 0.34
(no result reported for run c)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 0.16
b: 0.16
c: 0.16
d: 0.16
e: 0.16
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 8.83
b: 8.96
c: 9.93
d: 7.65
e: 9.98
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

WebP2 Image Encode 20220823 - MP/s, More Is Better
a: 0.04
b: 0.04
c: 0.04
d: 0.04
e: 0.04
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
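The WebP2 results above come from the experimental libwebp2 (cwp2) encoder built by the test profile, for which there is no common Python binding. As a loose analogue only, the sketch below times Pillow's standard WebP encoder on a synthetic 4K image and reports MP/s the same way; it is a different codec, so its figures are not comparable to the chart (assumes Pillow is installed).

import io
import time

from PIL import Image  # classic WebP encoder, not the experimental WebP2/cwp2

def encode_mps(img: Image.Image, **save_kwargs) -> float:
    # Encode once into memory and return throughput in megapixels per second.
    buf = io.BytesIO()
    start = time.perf_counter()
    img.save(buf, format="WEBP", **save_kwargs)
    return (img.width * img.height) / 1e6 / (time.perf_counter() - start)

# Synthetic 3840x2160 noise image; the PTS profile encodes a fixed sample photo instead.
image = Image.effect_noise((3840, 2160), 64).convert("RGB")

print("default:      ", round(encode_mps(image), 2), "MP/s")
print("q75, method 6:", round(encode_mps(image, quality=75, method=6), 2), "MP/s")
print("lossless:     ", round(encode_mps(image, lossless=True), 2), "MP/s")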

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 58.84 (MIN: 57.25 / MAX: 68.79)
b: 60.11 (MIN: 59.29 / MAX: 71.98)
c: 74.54 (MIN: 71.89 / MAX: 75.12)
d: 75.67 (MIN: 72.62 / MAX: 75.95)
e: 74.03 (MIN: 71.54 / MAX: 75.27)
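The PyTorch figures are produced by the Phoronix Test Suite's PyTorch test profile. The sketch below is a much simpler stand-in that measures CPU inference throughput for ResNet-50 at batch size 1 using torchvision with random weights, assuming torch and torchvision are installed; it illustrates the batches/sec metric rather than reproducing the exact harness.

import time

import torch
import torchvision

def batches_per_second(model: torch.nn.Module, batch_size: int = 1,
                       iters: int = 50, warmup: int = 10) -> float:
    # Time forward passes over random data and return batches per second.
    model.eval()
    x = torch.randn(batch_size, 3, 224, 224)
    with torch.inference_mode():
        for _ in range(warmup):
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        elapsed = time.perf_counter() - start
    return iters / elapsed

# Random weights are fine for a pure throughput measurement.
resnet50 = torchvision.models.resnet50(weights=None)
print(f"ResNet-50, batch 1, CPU: {batches_per_second(resnet50):.2f} batches/sec")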

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 28.71 (MIN: 8.87 / MAX: 29.47)
b: 28.97 (MIN: 7.95 / MAX: 29.71)
c: 22.35 (MIN: 22.12 / MAX: 27.32)
d: 22.73 (MIN: 22.46 / MAX: 27.6)
e: 22.45 (MIN: 22.19 / MAX: 27.15)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.32 (MIN: 12.67 / MAX: 49.3)
b: 44.47 (MIN: 12.19 / MAX: 46.31)
c: 46.47 (MIN: 11.72 / MAX: 48.37)
d: 44.14 (MIN: 11.59 / MAX: 46.03)
e: 38.68 (MIN: 9.93 / MAX: 46.76)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.48 (MIN: 14.21 / MAX: 48.5)
b: 46.73 (MIN: 12.71 / MAX: 49.06)
c: 46.76 (MIN: 12.51 / MAX: 48.77)
d: 46.30 (MIN: 12.41 / MAX: 48.18)
e: 46.49 (MIN: 11.98 / MAX: 48.65)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 17.17 (MIN: 7.36 / MAX: 17.91)
b: 17.63 (MIN: 8.26 / MAX: 18.7)
c: 17.99 (MIN: 7.44 / MAX: 18.86)
d: 15.14 (MIN: 5.99 / MAX: 17.74)
e: 18.17 (MIN: 9.41 / MAX: 18.96)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 39.05 (MIN: 10.5 / MAX: 40.73)
b: 39.34 (MIN: 10.03 / MAX: 46.87)
c: 47.00 (MIN: 11.89 / MAX: 48.99)
d: 47.83 (MIN: 16.97 / MAX: 49.88)
e: 39.29 (MIN: 10.75 / MAX: 42.24)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.04 (MIN: 9.24 / MAX: 18.83)
b: 17.12 (MIN: 6.99 / MAX: 17.92)
c: 16.90 (MIN: 6.08 / MAX: 17.69)
d: 18.14 (MIN: 9.88 / MAX: 18.9)
e: 18.24 (MIN: 8.5 / MAX: 19.02)

OpenSSL

Algorithm: RSA4096

OpenSSL - verify/s, More Is Better
a: 347145.5
b: 355954.3
c: 359762.4
d: 351474.2
e: 352828.9
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.88 (MIN: 15.71 / MAX: 48.91)
b: 46.76 (MIN: 16.58 / MAX: 48.66)
c: 46.32 (MIN: 12.42 / MAX: 48.73)
d: 46.40 (MIN: 11.75 / MAX: 48.73)
e: 46.58 (MIN: 12.98 / MAX: 49.17)

OpenSSL

Algorithm: RSA4096

OpenSSL - sign/s, More Is Better
a: 5360.0
b: 5476.0
c: 5536.0
d: 5410.1
e: 5429.5
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
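The RSA4096 sign/s and verify/s results come from OpenSSL's own multi-threaded speed benchmark. A single-threaded, rough approximation using Python's cryptography package (which itself wraps OpenSSL) is sketched below; absolute numbers will be much lower than the chart, but the large verify-vs-sign asymmetry is the same.

import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def ops_per_second(fn, seconds: float = 2.0) -> float:
    # Call fn in a loop for roughly `seconds` and return calls per second.
    count, start = 0, time.perf_counter()
    while time.perf_counter() - start < seconds:
        fn()
        count += 1
    return count / (time.perf_counter() - start)

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
pub = key.public_key()
message = b"benchmark payload"
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

sign_rate = ops_per_second(lambda: key.sign(message, padding.PKCS1v15(), hashes.SHA256()))
verify_rate = ops_per_second(lambda: pub.verify(signature, message, padding.PKCS1v15(), hashes.SHA256()))

print(f"RSA-4096 sign:   {sign_rate:.1f} ops/sec")
print(f"RSA-4096 verify: {verify_rate:.1f} ops/sec")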

OpenSSL

Algorithm: SHA512

OpenSSL - byte/s, More Is Better
a: 10849481210
b: 11062578760
c: 10975727400
d: 10815857230
e: 11006330870
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)

OpenSSL

Algorithm: SHA256

OpenSSL - byte/s, More Is Better
a: 35474953010
b: 35998666020
c: 35762336880
d: 35625391340
e: 35567121540
1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
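The SHA256 and SHA512 results report multi-threaded hashing throughput in bytes per second. A single-threaded sketch of the same metric with Python's hashlib (typically backed by OpenSSL) is below, purely to illustrate how byte/s is measured:

import hashlib
import time

def hash_throughput(name: str, chunk: bytes, seconds: float = 2.0) -> float:
    # Feed `chunk` to the hash repeatedly for roughly `seconds`; return bytes/sec.
    h = hashlib.new(name)
    total, start = 0, time.perf_counter()
    while time.perf_counter() - start < seconds:
        h.update(chunk)
        total += len(chunk)
    return total / (time.perf_counter() - start)

chunk = b"\x00" * (1 << 20)  # 1 MiB per update call
for algo in ("sha256", "sha512"):
    print(f"{algo}: {hash_throughput(algo, chunk) / 1e9:.2f} GB/s (single thread)")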

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1 - batches/sec, More Is Better
a: 46.82 (MIN: 12.21 / MAX: 48.82)
b: 38.93 (MIN: 10.46 / MAX: 47.12)
c: 46.95 (MIN: 11.8 / MAX: 48.85)
d: 44.52 (MIN: 13.09 / MAX: 46.68)
e: 46.67 (MIN: 15.18 / MAX: 48.54)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 14.89 (MIN: 6.18 / MAX: 17.49)
b: 18.42 (MIN: 9.4 / MAX: 19.35)
c: 14.65 (MIN: 6.03 / MAX: 16.53)
d: 18.08 (MIN: 8.98 / MAX: 18.85)
e: 18.09 (MIN: 8.97 / MAX: 18.85)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 18.08 (MIN: 10.66 / MAX: 18.86)
b: 17.71 (MIN: 6.7 / MAX: 18.52)
c: 18.04 (MIN: 8.26 / MAX: 18.82)
d: 14.88 (MIN: 6.24 / MAX: 18.19)
e: 18.30 (MIN: 10.64 / MAX: 19.07)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1 - batches/sec, More Is Better
a: 17.13 (MIN: 6.62 / MAX: 17.92)
b: 18.15 (MIN: 6.25 / MAX: 18.92)
c: 17.97 (MIN: 8.82 / MAX: 18.73)
d: 18.08 (MIN: 11.59 / MAX: 18.87)
e: 18.07 (MIN: 7.25 / MAX: 18.84)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 13.50 (MIN: 11.22 / MAX: 17.95)
b: 13.42 (MIN: 10.94 / MAX: 18.06)
c: 13.43 (MIN: 10.97 / MAX: 17.86)
d: 13.48 (MIN: 11.3 / MAX: 17.93)
e: 13.48 (MIN: 10.66 / MAX: 17.95)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.79 (MIN: 4.35 / MAX: 9.75)
b: 11.65 (MIN: 5.27 / MAX: 12.15)
c: 8.88 (MIN: 4.94 / MAX: 9.08)
d: 8.80 (MIN: 5.17 / MAX: 8.94)
e: 11.86 (MIN: 5.09 / MAX: 12.28)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 10.09 (MIN: 4.54 / MAX: 12.07)
b: 8.82 (MIN: 5.03 / MAX: 9.05)
c: 8.89 (MIN: 3.85 / MAX: 9.08)
d: 11.72 (MIN: 5.22 / MAX: 12.2)
e: 12.09 (MIN: 5.97 / MAX: 12.57)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.85 (MIN: 4.16 / MAX: 9.06)
b: 10.51 (MIN: 4.81 / MAX: 11.29)
c: 11.60 (MIN: 5.57 / MAX: 12.14)
d: 8.92 (MIN: 4.97 / MAX: 9.08)
e: 8.87 (MIN: 4.6 / MAX: 9.03)

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.94 (MIN: 5 / MAX: 9.09)
b: 11.98 (MIN: 5.05 / MAX: 12.49)
c: 8.95 (MIN: 4.32 / MAX: 9)
d: 11.59 (MIN: 5.55 / MAX: 12.11)
e: 8.78 (MIN: 4.28 / MAX: 9.7)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1 - batches/sec, More Is Better
a: 8.90 (MIN: 4.78 / MAX: 9.1)
b: 11.67 (MIN: 5.41 / MAX: 12.17)
c: 10.50 (MIN: 4.78 / MAX: 10.98)
d: 8.96 (MIN: 4.16 / MAX: 9.35)
e: 11.69 (MIN: 5.58 / MAX: 12.18)

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
a: 30.07 (MIN: 29.59 / MAX: 31.69)
b: 30.06 (MIN: 29.51 / MAX: 31.67)
c: 30.31 (MIN: 29.74 / MAX: 31.91)
d: 29.99 (MIN: 29.36 / MAX: 31.72)
e: 30.10 (MIN: 29.54 / MAX: 31.88)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
a: 30.48 (MIN: 29.84 / MAX: 32.11)
b: 30.25 (MIN: 29.79 / MAX: 32.07)
c: 30.20 (MIN: 29.65 / MAX: 31.81)
d: 30.19 (MIN: 29.61 / MAX: 31.75)
e: 30.27 (MIN: 29.65 / MAX: 32.08)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
a: 34.45 (MIN: 33.88 / MAX: 35.95)
b: 34.44 (MIN: 33.88 / MAX: 35.6)
c: 34.48 (MIN: 33.82 / MAX: 35.74)
d: 34.38 (MIN: 33.76 / MAX: 35.61)
e: 34.50 (MIN: 33.99 / MAX: 35.71)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better
a: 31.15 (MIN: 30.34 / MAX: 32.27)
b: 31.12 (MIN: 30.41 / MAX: 32.05)
c: 31.09 (MIN: 30.45 / MAX: 32.14)
d: 31.27 (MIN: 30.85 / MAX: 31.86)
e: 31.13 (MIN: 30.37 / MAX: 32.25)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
a: 36.21 (MIN: 35.74 / MAX: 37.79)
b: 36.41 (MIN: 35.88 / MAX: 37.89)
c: 36.25 (MIN: 35.88 / MAX: 36.94)
d: 36.42 (MIN: 35.89 / MAX: 38.27)
e: 36.22 (MIN: 35.75 / MAX: 37.76)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Frames Per Second, More Is Better
a: 31.86 (MIN: 31.53 / MAX: 32.96)
b: 31.75 (MIN: 31.42 / MAX: 32.35)
c: 31.72 (MIN: 31.39 / MAX: 32.38)
d: 31.65 (MIN: 31.25 / MAX: 33.03)
e: 31.88 (MIN: 31.49 / MAX: 33.08)


Phoronix Test Suite v10.8.5