dsdfds

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1662 BIOS) motherboard and ASUS Intel RPL-S 16GB graphics on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2410191-PTS-DSDFDS2287
Test categories represented in this result file:

  CPU Massive (2 tests)
  HPC - High Performance Computing (4 tests)
  Machine Learning (3 tests)
  Multi-Core (4 tests)
  Server CPU Tests (2 tests)


Test Runs

  Result Identifier   Date Run     Test Duration
  a                   October 19   1 Hour, 18 Minutes
  b                   October 19   1 Hour, 23 Minutes
  Average                          1 Hour, 20 Minutes


System Details

  Processor:          Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
  Motherboard:        ASUS PRIME Z790-P WIFI (1662 BIOS)
  Chipset:            Intel Raptor Lake-S PCH
  Memory:             2 x 16GB DDR5-6000MT/s Corsair CMK32GX5M2B6000C36
  Disk:               Western Digital WD_BLACK SN850X 2000GB
  Graphics:           ASUS Intel RPL-S 16GB
  Audio:              Realtek ALC897
  Monitor:            ASUS VP28U
  OS:                 Ubuntu 24.04
  Kernel:             6.10.0-061000rc6daily20240706-generic (x86_64)
  Desktop:            GNOME Shell 46.0
  Display Server:     X Server 1.21.1.11 + Wayland
  OpenGL:             4.6 Mesa 24.2~git2407080600.801ed4~oibaf~n (git-801ed4d 2024-07-08 noble-oibaf-ppa)
  Compiler:           GCC 13.2.0
  File-System:        ext4
  Screen Resolution:  3840x2160

System Logs

- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x129
- Thermald 2.5.6
- OpenJDK Runtime Environment (build 21.0.3+9-Ubuntu-1ubuntu1)
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; reg_file_data_sampling: Mitigation of Clear Register File; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence, BHI: BHI_DIS_S; srbds: Not affected; tsx_async_abort: Not affected

a vs. b Comparison: largest per-test differences between the two runs

  litert: NASNet Mobile                              40.6%
  litert: DeepLab V3                                 29.4%
  xnnpack: FP32MobileNetV1                           22.9%
  xnnpack: FP16MobileNetV1                           17.4%
  mnn: squeezenetv1.1                                14.1%
  litert: Mobilenet Quant                             9.9%
  mnn: mobilenetV3                                    9.7%
  litert: Inception ResNet V2                         7.4%
  mnn: MobileNetV2_224                                6.3%
  xnnpack: FP16MobileNetV3Large                       5.2%
  namd: ATPase with 327,506 Atoms                     5.1%
  litert: Quantized COCO SSD MobileNet v1             4.6%
  cassandra: Writes                                   4.2%
  byte: Whetstone Double                              3.7%
  onednn: Recurrent Neural Network Inference - CPU    2.9%
  xnnpack: FP32MobileNetV3Large                       2.8%
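The comparison percentages above can be reproduced from the raw a/b values: each entry is the ratio of the worse run to the better run, minus one. A minimal sketch (the helper name pct_diff is ours, not part of the Phoronix Test Suite):

```python
def pct_diff(a: float, b: float) -> float:
    """Relative gap between two runs: max/min - 1, as a percentage."""
    lo, hi = sorted((a, b))
    return (hi / lo - 1.0) * 100.0

# A few of the raw (a, b) results from this file
results = {
    "litert: NASNet Mobile": (19510.7, 13872.3),
    "litert: DeepLab V3": (4619.23, 5977.32),
    "xnnpack: FP32MobileNetV1": (1052, 1293),
}

for test, (a, b) in results.items():
    print(f"{test}: {pct_diff(a, b):.1f}%")  # 40.6%, 29.4%, 22.9%
```

Note the formula is direction-agnostic: it reports the magnitude of the gap whether a or b was faster, matching how the comparison chart mixes wins for both runs.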

Results Table (units per test as shown in the individual results below)

  Test                                                    a              b
  litert: NASNet Mobile                                   19510.7        13872.3
  litert: DeepLab V3                                      4619.23        5977.32
  xnnpack: FP32MobileNetV1                                1052           1293
  xnnpack: FP16MobileNetV1                                1667           1957
  mnn: squeezenetv1.1                                     1.818          1.594
  litert: Mobilenet Quant                                 3256.23        2963.55
  mnn: mobilenetV3                                        0.973          0.887
  litert: Inception ResNet V2                             28238.6        30330.4
  mnn: MobileNetV2_224                                    1.815          1.930
  xnnpack: FP16MobileNetV3Large                           1574           1496
  namd: ATPase with 327,506 Atoms                         1.52846        1.45419
  litert: Quantized COCO SSD MobileNet v1                 3200.84        3059.11
  cassandra: Writes                                       306186         293824
  byte: Whetstone Double                                  307795.5       319189.9
  onednn: Recurrent Neural Network Inference - CPU        1116.11        1085.07
  xnnpack: FP32MobileNetV3Large                           1285           1321
  xnnpack: FP32MobileNetV2                                1035           1016
  xnnpack: FP32MobileNetV3Small                           728            716
  xnnpack: FP16MobileNetV3Small                           841            828
  mnn: resnet-v2-50                                       12.63          12.44
  litert: Inception V4                                    17993.1        17757.9
  xnnpack: QS8MobileNetV2                                 786            776
  build2: Time To Compile                                 88.38          87.53
  byte: Dhrystone 2                                       1899605032.6   1917381047.3
  mnn: SqueezeNetV1.0                                     2.720          2.697
  litert: SqueezeNet                                      1382.17        1371.34
  mnn: nasnet                                             6.746          6.698
  litert: Mobilenet Float                                 986.15         992.64
  namd: STMV with 1,066,628 Atoms                         0.45061        0.44767
  byte: Pipe                                              72384778.6     71939326.6
  onednn: IP Shapes 1D - CPU                              1.99276        1.98251
  onednn: Deconvolution Batch shapes_3d - CPU             4.03162        4.01431
  mnn: inception-v3                                       15.83          15.89
  onednn: Deconvolution Batch shapes_1d - CPU             3.68079        3.67048
  byte: System Call                                       65518076.8     65688838.3
  onednn: IP Shapes 3D - CPU                              6.67614        6.65975
  xnnpack: FP16MobileNetV2                                1361           1358
  onednn: Convolution Batch Shapes Auto - CPU             6.18577        6.19464
  mnn: mobilenet-v1-1.0                                   2.061          2.063
  onednn: Recurrent Neural Network Training - CPU         2118.22        2116.90
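Because the table mixes units (us, ms, Op/s, LPS, ns/day), a single-number summary of a vs. b has to use per-test ratios rather than raw values; a geometric mean of such ratios is the standard unit-free aggregate (conceptually what OpenBenchmarking's overall geometric mean does). A hedged sketch with a few representative tests, not the exact Phoronix Test Suite aggregation:

```python
import math

def geomean(xs):
    """Geometric mean via logs, to avoid overflow on long products."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# (a, b, lower_is_better) for three tests from the table above.
# Ratios are oriented so that > 1.0 always means run b did better.
sample = [
    (19510.7, 13872.3, True),   # litert: NASNet Mobile, us
    (4619.23, 5977.32, True),   # litert: DeepLab V3, us
    (306186, 293824, False),    # cassandra: Writes, Op/s
]

ratios = [(a / b) if lower else (b / a) for a, b, lower in sample]
print(f"b vs. a geometric mean: {geomean(ratios):.3f}x")
```

On this three-test subset the runs come out within about 1.4% of each other overall, even though individual tests swing by up to 40%, which is why a per-test breakdown matters more here than any single aggregate.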

LiteRT

LiteRT 2024-10-15, Model: NASNet Mobile (Microseconds, fewer is better): a = 19510.7, b = 13872.3

LiteRT 2024-10-15, Model: DeepLab V3 (Microseconds, fewer is better): a = 4619.23, b = 5977.32

XNNPACK

XNNPACK b7b048, Model: FP32MobileNetV1 (us, fewer is better): a = 1052, b = 1293
  (CXX) g++ options: -O3 -lrt -lm

XNNPACK b7b048, Model: FP16MobileNetV1 (us, fewer is better): a = 1667, b = 1957

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: squeezenetv1.1 (ms, fewer is better): a = 1.818 (min 1.61, max 3.77), b = 1.594 (min 1.57, max 1.9)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl

LiteRT

LiteRT 2024-10-15, Model: Mobilenet Quant (Microseconds, fewer is better): a = 3256.23, b = 2963.55

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: mobilenetV3 (ms, fewer is better): a = 0.973 (min 0.88, max 1.68), b = 0.887 (min 0.86, max 1.12)

LiteRT

LiteRT 2024-10-15, Model: Inception ResNet V2 (Microseconds, fewer is better): a = 28238.6, b = 30330.4

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: MobileNetV2_224 (ms, fewer is better): a = 1.815 (min 1.71, max 2.73), b = 1.930 (min 1.71, max 2.92)

XNNPACK

XNNPACK b7b048, Model: FP16MobileNetV3Large (us, fewer is better): a = 1574, b = 1496

NAMD

NAMD 3.0, Input: ATPase with 327,506 Atoms (ns/day, more is better): a = 1.52846, b = 1.45419

LiteRT

LiteRT 2024-10-15, Model: Quantized COCO SSD MobileNet v1 (Microseconds, fewer is better): a = 3200.84, b = 3059.11

Apache Cassandra

Apache Cassandra 5.0, Test: Writes (Op/s, more is better): a = 306186, b = 293824

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Whetstone Double (MWIPS, more is better): a = 307795.5, b = 319189.9
  (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Inference - Engine: CPU (ms, fewer is better): a = 1116.11 (min 1040.59), b = 1085.07 (min 1043.53)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

XNNPACK

XNNPACK b7b048, Model: FP32MobileNetV3Large (us, fewer is better): a = 1285, b = 1321

XNNPACK b7b048, Model: FP32MobileNetV2 (us, fewer is better): a = 1035, b = 1016

XNNPACK b7b048, Model: FP32MobileNetV3Small (us, fewer is better): a = 728, b = 716

XNNPACK b7b048, Model: FP16MobileNetV3Small (us, fewer is better): a = 841, b = 828

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: resnet-v2-50 (ms, fewer is better): a = 12.63 (min 11.96, max 24.59), b = 12.44 (min 11.71, max 25.47)

LiteRT

LiteRT 2024-10-15, Model: Inception V4 (Microseconds, fewer is better): a = 17993.1, b = 17757.9

XNNPACK

XNNPACK b7b048, Model: QS8MobileNetV2 (us, fewer is better): a = 786, b = 776

Build2

Build2 0.17, Time To Compile (Seconds, fewer is better): a = 88.38, b = 87.53

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Dhrystone 2 (LPS, more is better): a = 1899605032.6, b = 1917381047.3

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: SqueezeNetV1.0 (ms, fewer is better): a = 2.720 (min 2.5, max 4.26), b = 2.697 (min 2.49, max 4.3)

LiteRT

LiteRT 2024-10-15, Model: SqueezeNet (Microseconds, fewer is better): a = 1382.17, b = 1371.34

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: nasnet (ms, fewer is better): a = 6.746 (min 6.27, max 14.18), b = 6.698 (min 6.26, max 13.37)

LiteRT

LiteRT 2024-10-15, Model: Mobilenet Float (Microseconds, fewer is better): a = 986.15, b = 992.64

NAMD

NAMD 3.0, Input: STMV with 1,066,628 Atoms (ns/day, more is better): a = 0.45061, b = 0.44767

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Pipe (LPS, more is better): a = 72384778.6, b = 71939326.6

oneDNN

oneDNN 3.6, Harness: IP Shapes 1D - Engine: CPU (ms, fewer is better): a = 1.99276 (min 1.81), b = 1.98251 (min 1.8)

oneDNN 3.6, Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, fewer is better): a = 4.03162 (min 3.78), b = 4.01431 (min 3.75)

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: inception-v3 (ms, fewer is better): a = 15.83 (min 14.78, max 27.51), b = 15.89 (min 14.84, max 27.53)

oneDNN

oneDNN 3.6, Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, fewer is better): a = 3.68079 (min 3.37), b = 3.67048 (min 3.38)

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: System Call (LPS, more is better): a = 65518076.8, b = 65688838.3

oneDNN

oneDNN 3.6, Harness: IP Shapes 3D - Engine: CPU (ms, fewer is better): a = 6.67614 (min 6.22), b = 6.65975 (min 6.25)

XNNPACK

XNNPACK b7b048, Model: FP16MobileNetV2 (us, fewer is better): a = 1361, b = 1358

oneDNN

oneDNN 3.6, Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, fewer is better): a = 6.18577 (min 5.97), b = 6.19464 (min 5.96)

Mobile Neural Network

Mobile Neural Network 2.9.b11b7037d, Model: mobilenet-v1-1.0 (ms, fewer is better): a = 2.061 (min 1.8, max 5.06), b = 2.063 (min 1.81, max 5.01)

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Training - Engine: CPU (ms, fewer is better): a = 2118.22 (min 2050.34), b = 2116.90 (min 2049.88)