new tests eo nov

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and AMD Radeon 15GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2311285-PTS-NEWTESTS44&grw&sor.
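
To compare another system against these numbers, the public result identifier in that URL can be handed straight to the Phoronix Test Suite, e.g. phoronix-test-suite benchmark 2311285-PTS-NEWTESTS44, which (assuming the client and the test dependencies are installed) runs the same test profiles locally and merges the new system into this comparison.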

Runs a, b, c, d, and e were all performed on the following system configuration:

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: AMD Radeon 15GB (1617/1124MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.5.0-10-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server 1.21.1.7 + Wayland
OpenGL: 4.6 Mesa 24.0~git2311100600.05fb6b~oibaf~m (git-05fb6b9 2023-11-10 mantic-oibaf-ppa) (LLVM 16.0.6 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x11d - Thermald 2.5.4
Java Details: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu1)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Test profiles covered in this comparison (detailed results for runs a through e follow below):

Java SciMark 2.2: Composite, Monte Carlo, Fast Fourier Transform, Sparse Matrix Multiply, Dense LU Matrix Factorization, Jacobi Successive Over-Relaxation
WebP2 Image Encode 20220823: Default, Quality 75 / Compression Effort 7, Quality 95 / Compression Effort 7, Quality 100 / Compression Effort 5, Quality 100 Lossless Compression
PyTorch 2.1 (CPU): ResNet-50, ResNet-152, and Efficientnet_v2_l at batch sizes 1, 16, 32, 64, 256, and 512
OpenSSL 3.0.10: RSA4096 sign/s and verify/s, SHA256, SHA512
Embree 4.3: Pathtracer and Pathtracer ISPC on the Crown, Asian Dragon, and Asian Dragon Obj scenes

Java SciMark

Computational Test: Composite

Java SciMark 2.2, Mflops, more is better:
b: 4785.17, e: 4779.52, c: 4773.58, d: 4772.90, a: 4716.01

Java SciMark

Computational Test: Monte Carlo

Java SciMark 2.2, Mflops, more is better:
e: 1568.08, d: 1567.51, b: 1567.51, a: 1567.51, c: 1556.15

Java SciMark

Computational Test: Fast Fourier Transform

Java SciMark 2.2, Mflops, more is better:
c: 1232.91, b: 1232.02, e: 1231.13, d: 1230.69, a: 1219.73

Java SciMark

Computational Test: Sparse Matrix Multiply

Java SciMark 2.2, Mflops, more is better:
e: 4794.85, a: 4792.04, b: 4790.64, c: 4789.24, d: 4780.86

Java SciMark

Computational Test: Dense LU Matrix Factorization

Java SciMark 2.2, Mflops, more is better:
b: 13387.72, e: 13354.20, c: 13341.67, d: 13337.50, a: 13059.89

Java SciMark

Computational Test: Jacobi Successive Over-Relaxation

Java SciMark 2.2, Mflops, more is better:
e: 2949.36, d: 2947.94, c: 2947.94, b: 2947.94, a: 2940.87
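
The Jacobi Successive Over-Relaxation score measures how fast the JVM runs a stencil update over a 2D grid, with Mflops derived from the number of interior-point updates per second. The sketch below is an illustrative re-implementation of that kind of kernel in Python; the grid size, omega, iteration count, boundary values, and the 6-flops-per-update accounting are assumptions, and a pure-Python loop will score far below the JIT-compiled Java figures above:

    import time

    def sor_mflops(n=100, omega=1.25, iterations=100):
        """In-place successive over-relaxation sweep over an n x n grid."""
        g = [[0.0] * n for _ in range(n)]
        g[0] = [1.0] * n                      # arbitrary non-zero boundary row

        omega_over_four = omega * 0.25
        one_minus_omega = 1.0 - omega

        start = time.perf_counter()
        for _ in range(iterations):
            for i in range(1, n - 1):
                row, above, below = g[i], g[i - 1], g[i + 1]
                for j in range(1, n - 1):
                    row[j] = (omega_over_four
                              * (above[j] + below[j] + row[j - 1] + row[j + 1])
                              + one_minus_omega * row[j])
        elapsed = time.perf_counter() - start

        flops = 6.0 * (n - 2) * (n - 2) * iterations   # ~6 flops per interior update
        return flops / elapsed / 1e6

    print(f"{sor_mflops():.1f} Mflops (pure Python)")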

WebP2 Image Encode

Encode Settings: Default

WebP2 Image Encode 20220823, MP/s, more is better:
d: 16.24, b: 15.89, a: 15.78, e: 15.65, c: 15.40
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823, MP/s, more is better:
a: 0.35, e: 0.34, d: 0.34, b: 0.33 (run c did not report a result)
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823, MP/s, more is better:
e: 0.16, d: 0.16, c: 0.16, b: 0.16, a: 0.16
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823, MP/s, more is better:
e: 9.98, c: 9.93, b: 8.96, a: 8.83, d: 7.65
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

WebP2 Image Encode 20220823, MP/s, more is better:
e: 0.04, d: 0.04, c: 0.04, b: 0.04, a: 0.04
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
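
For scale: a single 3840x2160 frame is roughly 8.3 megapixels, so at the ~0.04 MP/s measured here a lossless encode of one such image would take on the order of three and a half minutes, whereas the Default setting at ~16 MP/s would finish it in about half a second.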

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1, batches/sec, more is better:
d: 75.67 (min 72.62 / max 75.95)
c: 74.54 (min 71.89 / max 75.12)
e: 74.03 (min 71.54 / max 75.27)
b: 60.11 (min 59.29 / max 71.98)
a: 58.84 (min 57.25 / max 68.79)
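
For context on how a batches-per-second figure like this is obtained, the sketch below times repeated forward passes of a ResNet-50 on the CPU with PyTorch. It is only a minimal illustration under assumed settings (random weights, 224x224 inputs, 5 warm-up and 50 timed iterations), not the exact harness the Phoronix Test Suite runs:

    import time
    import torch
    import torchvision.models as models

    # Build ResNet-50 in eval mode on the CPU; weights are random because
    # only throughput is being measured, not accuracy.
    model = models.resnet50()
    model.eval()

    batch_size = 1                        # batch sizes 1-512 appear in the results
    x = torch.randn(batch_size, 3, 224, 224)

    with torch.no_grad():
        for _ in range(5):                # warm-up so one-time setup is not timed
            model(x)

        iterations = 50
        start = time.perf_counter()
        for _ in range(iterations):
            model(x)
        elapsed = time.perf_counter() - start

    print(f"{iterations / elapsed:.2f} batches/sec at batch size {batch_size}")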

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1, batches/sec, more is better:
b: 28.97 (min 7.95 / max 29.71)
a: 28.71 (min 8.87 / max 29.47)
d: 22.73 (min 22.46 / max 27.6)
e: 22.45 (min 22.19 / max 27.15)
c: 22.35 (min 22.12 / max 27.32)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1, batches/sec, more is better:
c: 46.47 (min 11.72 / max 48.37)
a: 46.32 (min 12.67 / max 49.3)
b: 44.47 (min 12.19 / max 46.31)
d: 44.14 (min 11.59 / max 46.03)
e: 38.68 (min 9.93 / max 46.76)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.1, batches/sec, more is better:
c: 46.76 (min 12.51 / max 48.77)
b: 46.73 (min 12.71 / max 49.06)
e: 46.49 (min 11.98 / max 48.65)
a: 46.48 (min 14.21 / max 48.5)
d: 46.30 (min 12.41 / max 48.18)

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1, batches/sec, more is better:
e: 18.17 (min 9.41 / max 18.96)
c: 17.99 (min 7.44 / max 18.86)
b: 17.63 (min 8.26 / max 18.7)
a: 17.17 (min 7.36 / max 17.91)
d: 15.14 (min 5.99 / max 17.74)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.1, batches/sec, more is better:
d: 47.83 (min 16.97 / max 49.88)
c: 47.00 (min 11.89 / max 48.99)
b: 39.34 (min 10.03 / max 46.87)
e: 39.29 (min 10.75 / max 42.24)
a: 39.05 (min 10.5 / max 40.73)

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.1, batches/sec, more is better:
e: 18.24 (min 8.5 / max 19.02)
d: 18.14 (min 9.88 / max 18.9)
a: 18.04 (min 9.24 / max 18.83)
b: 17.12 (min 6.99 / max 17.92)
c: 16.90 (min 6.08 / max 17.69)

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023), verify/s, more is better:
c: 359762.4, b: 355954.3, e: 352828.9, d: 351474.2, a: 347145.5

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.1, batches/sec, more is better:
a: 46.88 (min 15.71 / max 48.91)
b: 46.76 (min 16.58 / max 48.66)
e: 46.58 (min 12.98 / max 49.17)
d: 46.40 (min 11.75 / max 48.73)
c: 46.32 (min 12.42 / max 48.73)

OpenSSL

Algorithm: RSA4096

OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023), sign/s, more is better:
c: 5536.0, b: 5476.0, e: 5429.5, d: 5410.1, a: 5360.0

OpenSSL

Algorithm: SHA512

OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023), byte/s, more is better:
b: 11062578760, e: 11006330870, c: 10975727400, a: 10849481210, d: 10815857230

OpenSSL

Algorithm: SHA256

OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023), byte/s, more is better:
b: 35998666020, c: 35762336880, d: 35625391340, e: 35567121540, a: 35474953010
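
These byte/s figures come from OpenSSL's built-in speed benchmark. As a rough, single-threaded illustration of how a hash-throughput number of this kind is derived (Python's hashlib typically wraps OpenSSL, but this is not the openssl speed code path, and the 64 KiB buffer and 3-second window are arbitrary choices):

    import hashlib
    import time

    def sha256_throughput(block_size=64 * 1024, duration=3.0):
        """Hash a fixed buffer repeatedly and report bytes hashed per second."""
        block = b"\x00" * block_size
        hashed = 0
        start = time.perf_counter()
        while time.perf_counter() - start < duration:
            hashlib.sha256(block).digest()
            hashed += block_size
        elapsed = time.perf_counter() - start
        return hashed / elapsed

    print(f"SHA-256: {sha256_throughput():,.0f} bytes/sec (single thread)")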

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.1, batches/sec, more is better:
c: 46.95 (min 11.8 / max 48.85)
a: 46.82 (min 12.21 / max 48.82)
e: 46.67 (min 15.18 / max 48.54)
d: 44.52 (min 13.09 / max 46.68)
b: 38.93 (min 10.46 / max 47.12)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.1, batches/sec, more is better:
b: 18.42 (min 9.4 / max 19.35)
e: 18.09 (min 8.97 / max 18.85)
d: 18.08 (min 8.98 / max 18.85)
a: 14.89 (min 6.18 / max 17.49)
c: 14.65 (min 6.03 / max 16.53)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.1, batches/sec, more is better:
e: 18.30 (min 10.64 / max 19.07)
a: 18.08 (min 10.66 / max 18.86)
c: 18.04 (min 8.26 / max 18.82)
b: 17.71 (min 6.7 / max 18.52)
d: 14.88 (min 6.24 / max 18.19)

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.1, batches/sec, more is better:
b: 18.15 (min 6.25 / max 18.92)
d: 18.08 (min 11.59 / max 18.87)
e: 18.07 (min 7.25 / max 18.84)
c: 17.97 (min 8.82 / max 18.73)
a: 17.13 (min 6.62 / max 17.92)

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, more is better:
a: 13.50 (min 11.22 / max 17.95)
e: 13.48 (min 10.66 / max 17.95)
d: 13.48 (min 11.3 / max 17.93)
c: 13.43 (min 10.97 / max 17.86)
b: 13.42 (min 10.94 / max 18.06)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, more is better:
e: 11.86 (min 5.09 / max 12.28)
b: 11.65 (min 5.27 / max 12.15)
c: 8.88 (min 4.94 / max 9.08)
d: 8.80 (min 5.17 / max 8.94)
a: 8.79 (min 4.35 / max 9.75)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, more is better:
e: 12.09 (min 5.97 / max 12.57)
d: 11.72 (min 5.22 / max 12.2)
a: 10.09 (min 4.54 / max 12.07)
c: 8.89 (min 3.85 / max 9.08)
b: 8.82 (min 5.03 / max 9.05)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, more is better:
c: 11.60 (min 5.57 / max 12.14)
b: 10.51 (min 4.81 / max 11.29)
d: 8.92 (min 4.97 / max 9.08)
e: 8.87 (min 4.6 / max 9.03)
a: 8.85 (min 4.16 / max 9.06)

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, more is better:
b: 11.98 (min 5.05 / max 12.49)
d: 11.59 (min 5.55 / max 12.11)
c: 8.95 (min 4.32 / max 9)
a: 8.94 (min 5 / max 9.09)
e: 8.78 (min 4.28 / max 9.7)

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.1, batches/sec, more is better:
e: 11.69 (min 5.58 / max 12.18)
b: 11.67 (min 5.41 / max 12.17)
c: 10.50 (min 4.78 / max 10.98)
d: 8.96 (min 4.16 / max 9.35)
a: 8.90 (min 4.78 / max 9.1)

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3, frames per second, more is better:
c: 30.31 (min 29.74 / max 31.91)
e: 30.10 (min 29.54 / max 31.88)
a: 30.07 (min 29.59 / max 31.69)
b: 30.06 (min 29.51 / max 31.67)
d: 29.99 (min 29.36 / max 31.72)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3, frames per second, more is better:
a: 30.48 (min 29.84 / max 32.11)
e: 30.27 (min 29.65 / max 32.08)
b: 30.25 (min 29.79 / max 32.07)
c: 30.20 (min 29.65 / max 31.81)
d: 30.19 (min 29.61 / max 31.75)

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3, frames per second, more is better:
e: 34.50 (min 33.99 / max 35.71)
c: 34.48 (min 33.82 / max 35.74)
a: 34.45 (min 33.88 / max 35.95)
b: 34.44 (min 33.88 / max 35.6)
d: 34.38 (min 33.76 / max 35.61)

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3, frames per second, more is better:
d: 31.27 (min 30.85 / max 31.86)
a: 31.15 (min 30.34 / max 32.27)
e: 31.13 (min 30.37 / max 32.25)
b: 31.12 (min 30.41 / max 32.05)
c: 31.09 (min 30.45 / max 32.14)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3, frames per second, more is better:
d: 36.42 (min 35.89 / max 38.27)
b: 36.41 (min 35.88 / max 37.89)
c: 36.25 (min 35.88 / max 36.94)
e: 36.22 (min 35.75 / max 37.76)
a: 36.21 (min 35.74 / max 37.79)

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3, frames per second, more is better:
e: 31.88 (min 31.49 / max 33.08)
a: 31.86 (min 31.53 / max 32.96)
b: 31.75 (min 31.42 / max 32.35)
c: 31.72 (min 31.39 / max 32.38)
d: 31.65 (min 31.25 / max 33.03)


Phoronix Test Suite v10.8.5