extra cpu tests

Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (1402 BIOS) and AMD Radeon RX 6800 XT 16GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209122-NE-EXTRACPUT57

Result Identifier    Date Run             Test Duration
Ryzen 7 5700G        September 01 2022    2 Hours, 30 Minutes
Ryzen 7 5800X        September 03 2022    2 Hours, 10 Minutes
Ryzen 9 3900X        September 03 2022    4 Hours, 29 Minutes
Ryzen 9 3950X        September 06 2022    1 Hour, 13 Minutes
Ryzen 9 5950X        September 06 2022    57 Minutes
AMD 5950X            September 06 2022    1 Hour, 3 Minutes
Core i9 12900K       September 08 2022    1 Hour, 29 Minutes
Core i5 12600K       September 10 2022    2 Hours, 8 Minutes
Core i9 11900K       September 12 2022    2 Hours, 16 Minutes


extra cpu tests - System Details

Common configuration (as tested on the Ryzen 7 5700G; other systems differ only where noted below):

  Processor: AMD Ryzen 7 5700G @ 4.67GHz (8 Cores / 16 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (4006 BIOS)
  Chipset: AMD Renoir/Cezanne
  Memory: 32GB
  Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: AMD Navi 21 HDMI Audio
  Monitor: ASUS MG28U
  Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 22.04
  Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.224
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  Screen Resolution: 3840x2160

Per-system differences:

- Ryzen 7 5800X: Processor: AMD Ryzen 7 5800X 8-Core @ 3.80GHz (8 Cores / 16 Threads); Chipset: AMD Starship/Matisse
- Ryzen 9 3900X: Processor: AMD Ryzen 9 3900X 12-Core @ 3.80GHz (12 Cores / 24 Threads)
- Ryzen 9 3950X: Processor: AMD Ryzen 9 3950X 16-Core @ 3.50GHz (16 Cores / 32 Threads)
- Ryzen 9 5950X / AMD 5950X: Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads)
- Core i9 12900K: Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads); Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1720 BIOS); Chipset: Intel Device 7aa7; Audio: Intel Device 7ad0; Monitor: ASUS VP28U; Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
- Core i5 12600K: Processor: Intel Core i5-12600K @ 4.90GHz (10 Cores / 16 Threads)
- Core i9 11900K: Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads); Motherboard: ASUS ROG MAXIMUS XIII HERO (1402 BIOS); Chipset: Intel Tiger Lake-H; Audio: Intel Tiger Lake-H HD Audio; Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411

Kernel Details: Transparent Huge Pages: madvise

Environment Details: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:

- Ryzen 7 5700G: Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa50000c
- Ryzen 7 5800X: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa201016
- Ryzen 9 3900X: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x8701021
- Ryzen 9 3950X: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x8701021
- Ryzen 9 5950X: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa201016
- AMD 5950X: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa201016
- Core i9 12900K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x23 - Thermald 2.4.9
- Core i5 12600K: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x23 - Thermald 2.4.9
- Core i9 11900K: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x53 - Thermald 2.4.9

Python Details: Python 3.10.4

Security Details:

- Ryzen 7 5700G: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- Ryzen 7 5800X: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- Ryzen 9 3900X: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- Ryzen 9 3950X: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- Ryzen 9 5950X: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- AMD 5950X: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- Core i9 12900K: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
- Core i5 12600K: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
- Core i9 11900K: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview chart (Phoronix Test Suite): relative performance of the nine result identifiers on a 100%-294% scale across Natron, Mobile Neural Network, 7-Zip Compression, Timed PHP Compilation, and OpenVINO.]
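The overview percentages above are geometric means of each system's normalized per-test scores, which is the standard way to build a cross-test composite. A minimal sketch of that calculation (the input values here are illustrative, not taken from this result file):

```python
import math

def geometric_mean(values):
    # nth root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical normalized scores: each test's result expressed relative to
# a baseline system that scores 1.0 on every test
normalized = [1.49, 1.97, 2.46, 1.20, 2.94]
print(round(geometric_mean(normalized), 2))
```

Unlike an arithmetic mean, the geometric mean keeps one outlier test from dominating the composite, which is why benchmark overviews typically prefer it.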

[Detailed results table for "extra cpu tests", covering: OpenVINO (Weld Porosity Detection, Face Detection, Vehicle Detection, Age Gender Recognition Retail 0013, Machine Translation EN To DE, Person Vehicle Bike Detection, and Person Detection models), Mobile Neural Network 2.1 (mobilenet-v1-1.0, MobileNetV2_224, mobilenetV3, nasnet, squeezenetv1.1, resnet-v2-50, SqueezeNetV1.0, inception-v3), Natron (Spaceship), Facebook RocksDB (Random Read, Read While Writing, Update Random, Read Random Write Random), 7-Zip Compression/Decompression Ratings, BRL-CAD VGR, Blender (BMW27, Barbershop, Classroom, Pabellon Barcelona, Fishy Cat - CPU-Only), Timed PHP Compilation, CouchDB (100 - 1000 - 30), WebP and WebP2 encoding at various quality/effort settings, and WAV To FLAC encoding. Per-test values appear in the graphs below.]

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better)

  AMD 5950X:       16.71  (MIN: 8.06 / MAX: 24.96)
  Core i5 12600K:  13.06  (MIN: 10.18 / MAX: 16.8)
  Core i9 11900K:   6.45  (MIN: 3.62 / MAX: 30.17)
  Core i9 12900K:  12.26  (MIN: 9.88 / MAX: 17.42)
  Ryzen 7 5700G:   14.90  (MIN: 7.33 / MAX: 25.89)
  Ryzen 7 5800X:   13.54  (MIN: 7.08 / MAX: 22.51)
  Ryzen 9 3900X:   28.44  (MIN: 14.9 / MAX: 78.52)
  Ryzen 9 3950X:   27.47  (MIN: 18.76 / MAX: 35.74)
  Ryzen 9 5950X:   16.75  (MIN: 11.09 / MAX: 21.33)

1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared
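Latency figures like these are easiest to compare once normalized to the fastest system. A small illustrative helper, using a subset of the Weld Porosity Detection FP16-INT8 latencies reported above:

```python
def relative_latency(results):
    # Express each latency as a multiple of the fastest (lowest) entry
    best = min(results.values())
    return {cpu: round(ms / best, 2) for cpu, ms in results.items()}

# Average latencies (ms) from the chart above
weld_porosity_fp16_int8_ms = {
    "Core i9 11900K": 6.45,
    "Core i9 12900K": 12.26,
    "Ryzen 9 5950X": 16.75,
    "Ryzen 9 3900X": 28.44,
}
print(relative_latency(weld_porosity_fp16_int8_ms))
```

The leader gets 1.0 and every other system reads as an "x times slower" factor, which is often more intuitive than raw milliseconds.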

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)

  AMD 5950X:       820.04  (MIN: 793.27 / MAX: 831.81)
  Core i5 12600K:  389.11  (MIN: 329.59 / MAX: 829)
  Core i9 11900K:  332.72  (MIN: 298.23 / MAX: 447.88)
  Core i9 12900K:  389.94  (MIN: 298.89 / MAX: 923.96)
  Ryzen 7 5700G:   730.58  (MIN: 492.86 / MAX: 776.15)
  Ryzen 7 5800X:   657.33  (MIN: 638.88 / MAX: 664.91)
  Ryzen 9 3900X:  1406.22  (MIN: 1307.87 / MAX: 1498.4)
  Ryzen 9 3950X:  1362.52  (MIN: 1339.1 / MAX: 1376.54)
  Ryzen 9 5950X:   819.44  (MIN: 733.89 / MAX: 843.65)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better)

  AMD 5950X:       235.54
  Core i5 12600K:  271.34
  Core i9 11900K:  101.29
  Core i9 12900K:  376.49
  Ryzen 7 5700G:   138.98
  Ryzen 7 5800X:   196.76
  Ryzen 9 3900X:   175.55
  Ryzen 9 3950X:   206.46
  Ryzen 9 5950X:   236.05

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated variant. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better)

  AMD 5950X:       2.502  (MIN: 2.47 / MAX: 3.22)
  Core i5 12600K:  2.306  (MIN: 2.29 / MAX: 2.59)
  Core i9 11900K:  2.046  (MIN: 1.77 / MAX: 30.14)
  Core i9 12900K:  3.079  (MIN: 2.87 / MAX: 8.47)
  Ryzen 7 5700G:   2.347  (MIN: 2.22 / MAX: 13.86)
  Ryzen 7 5800X:   1.595  (MIN: 1.57 / MAX: 2.03)
  Ryzen 9 3900X:   5.801  (MIN: 4.6 / MAX: 116.95)
  Ryzen 9 3950X:   3.237  (MIN: 3.16 / MAX: 4.09)
  Ryzen 9 5950X:   2.472  (MIN: 2.44 / MAX: 2.6)

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO


OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)

  AMD 5950X:        9.74
  Core i5 12600K:  10.19
  Core i9 11900K:  12.00
  Core i9 12900K:  15.36
  Ryzen 7 5700G:    5.47
  Ryzen 7 5800X:    6.08
  Ryzen 9 3900X:    4.25
  Ryzen 9 3950X:    5.84
  Ryzen 9 5950X:    9.72

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better)

  AMD 5950X:       18.19  (MIN: 8.69 / MAX: 26.01)
  Core i5 12600K:  49.71  (MIN: 47.84 / MAX: 51.93)
  Core i9 11900K:  25.57  (MIN: 15.77 / MAX: 54.84)
  Core i9 12900K:  46.77  (MIN: 24.98 / MAX: 59.29)
  Ryzen 7 5700G:   17.42  (MIN: 9.07 / MAX: 30.09)
  Ryzen 7 5800X:   14.29  (MIN: 7.5 / MAX: 24.54)
  Ryzen 9 3900X:   18.67  (MIN: 13.72 / MAX: 70.89)
  Ryzen 9 3950X:   20.45  (MIN: 11.74 / MAX: 39.16)
  Ryzen 9 5950X:   18.19  (MIN: 9.4 / MAX: 28.56)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)

  AMD 5950X:        957.10
  Core i5 12600K:   759.88
  Core i9 11900K:  1237.56
  Core i9 12900K:  1299.07
  Ryzen 7 5700G:    536.55
  Ryzen 7 5800X:    590.52
  Ryzen 9 3900X:    421.77
  Ryzen 9 3950X:    582.29
  Ryzen 9 5950X:    954.66

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, more is better)

  AMD 5950X:       3.8
  Core i5 12600K:  3.9
  Core i9 11900K:  2.0
  Core i9 12900K:  5.3
  Ryzen 7 5700G:   2.9
  Ryzen 7 5800X:   3.4
  Ryzen 9 3900X:   1.8
  Ryzen 9 3950X:   3.1
  Ryzen 9 5950X:   3.7

Mobile Neural Network


Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better)

  AMD 5950X:       3.431  (MIN: 3.36 / MAX: 4.29)
  Core i5 12600K:  2.027  (MIN: 2.01 / MAX: 2.55)
  Core i9 11900K:  2.216  (MIN: 1.9 / MAX: 22.89)
  Core i9 12900K:  2.549  (MIN: 2.46 / MAX: 3.95)
  Ryzen 7 5700G:   2.439  (MIN: 2.31 / MAX: 14.01)
  Ryzen 7 5800X:   1.724  (MIN: 1.67 / MAX: 2.22)
  Ryzen 9 3900X:   4.826  (MIN: 4.15 / MAX: 118.91)
  Ryzen 9 3950X:   5.000  (MIN: 4.84 / MAX: 5.79)
  Ryzen 9 5950X:   3.129  (MIN: 3.1 / MAX: 3.85)

OpenVINO


OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better)

  AMD 5950X:       0.66  (MIN: 0.39 / MAX: 9.09)
  Core i5 12600K:  0.72  (MIN: 0.5 / MAX: 1.92)
  Core i9 11900K:  0.33  (MIN: 0.19 / MAX: 13.53)
  Core i9 12900K:  0.66  (MIN: 0.48 / MAX: 3.61)
  Ryzen 7 5700G:   0.62  (MIN: 0.39 / MAX: 12.23)
  Ryzen 7 5800X:   0.54  (MIN: 0.33 / MAX: 10.16)
  Ryzen 9 3900X:   0.93  (MIN: 0.54 / MAX: 34.61)
  Ryzen 9 3950X:   0.93  (MIN: 0.56 / MAX: 9.29)
  Ryzen 9 5950X:   0.66  (MIN: 0.4 / MAX: 9.7)

Mobile Neural Network


Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better)

  AMD 5950X:       1.925  (MIN: 1.81 / MAX: 2.33)
  Core i5 12600K:  1.184  (MIN: 1.02 / MAX: 2.03)
  Core i9 11900K:  0.997  (MIN: 0.92 / MAX: 4.16)
  Core i9 12900K:  1.169  (MIN: 1.13 / MAX: 1.42)
  Ryzen 7 5700G:   1.130  (MIN: 1.09 / MAX: 10.93)
  Ryzen 7 5800X:   0.883  (MIN: 0.86 / MAX: 4.6)
  Ryzen 9 3900X:   2.375  (MIN: 2.06 / MAX: 89.53)
  Ryzen 9 3950X:   2.246  (MIN: 2.21 / MAX: 3.08)
  Ryzen 9 5950X:   1.982  (MIN: 1.7 / MAX: 28.92)

OpenVINO


OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better)

  AMD 5950X:       33.92  (MIN: 20.26 / MAX: 49.04)
  Core i5 12600K:  14.73  (MIN: 12.62 / MAX: 22.93)
  Core i9 11900K:  39.46  (MIN: 23.24 / MAX: 99.41)
  Core i9 12900K:  15.92  (MIN: 12.39 / MAX: 29.4)
  Ryzen 7 5700G:   28.75  (MIN: 10.6 / MAX: 50.31)
  Ryzen 7 5800X:   20.32  (MIN: 12.41 / MAX: 37.85)
  Ryzen 9 3900X:   34.15  (MIN: 12.29 / MAX: 77.76)
  Ryzen 9 3950X:   38.71  (MIN: 13.3 / MAX: 56.5)
  Ryzen 9 5950X:   33.85  (MIN: 24.98 / MAX: 49.81)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)

  AMD 5950X:       11.82  (MIN: 6.54 / MAX: 21.16)
  Core i5 12600K:   8.26  (MIN: 7.26 / MAX: 19.99)
  Core i9 11900K:   6.74  (MIN: 3.45 / MAX: 23.72)
  Core i9 12900K:   8.19  (MIN: 6.68 / MAX: 25.81)
  Ryzen 7 5700G:   13.03  (MIN: 6.64 / MAX: 25.26)
  Ryzen 7 5800X:    9.68  (MIN: 5.23 / MAX: 19.09)
  Ryzen 9 3900X:   16.86  (MIN: 10.76 / MAX: 72.48)
  Ryzen 9 3950X:   16.16  (MIN: 9.15 / MAX: 68.05)
  Ryzen 9 5950X:   11.79  (MIN: 7.87 / MAX: 21.31)

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better)

  AMD 5950X:       103,048,882
  Core i5 12600K:   79,141,941
  Core i9 11900K:   52,272,191
  Core i9 12900K:  126,428,965
  Ryzen 9 3950X:    75,585,720

1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
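Throughput figures in operations per second convert directly to an average per-operation time, which can be a more intuitive unit for key-value reads. A quick sketch using the Random Read numbers above:

```python
def ns_per_op(ops_per_sec):
    # Average time per operation in nanoseconds, from aggregate throughput
    return 1e9 / ops_per_sec

# Random Read throughput (Op/s) from the chart above
print(round(ns_per_op(126_428_965), 2))  # Core i9 12900K
print(round(ns_per_op(52_272_191), 2))   # Core i9 11900K
```

Note these are aggregate figures across all reader threads, so the per-thread latency is correspondingly higher.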

Mobile Neural Network


Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better)

  AMD 5950X:       11.693  (MIN: 11.35 / MAX: 20.19)
  Core i5 12600K:   7.710  (MIN: 7.66 / MAX: 9.88)
  Core i9 11900K:   8.188  (MIN: 7.22 / MAX: 33.83)
  Core i9 12900K:   8.745  (MIN: 8.71 / MAX: 9.32)
  Ryzen 7 5700G:    8.826  (MIN: 8.52 / MAX: 21.27)
  Ryzen 7 5800X:    6.626  (MIN: 6.48 / MAX: 7.43)
  Ryzen 9 3900X:   15.842  (MIN: 13.21 / MAX: 143.35)
  Ryzen 9 3950X:   15.030  (MIN: 14.83 / MAX: 22.29)
  Ryzen 9 5950X:   10.933  (MIN: 10.69 / MAX: 11.64)

OpenVINO


OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)

  AMD 5950X:       676.45
  Core i5 12600K:  483.95
  Core i9 11900K:  592.28
  Core i9 12900K:  731.92
  Ryzen 7 5700G:   306.65
  Ryzen 7 5800X:   412.78
  Ryzen 9 3900X:   355.68
  Ryzen 9 3950X:   494.84
  Ryzen 9 5950X:   677.86

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)

  AMD 5950X:       141750
  Core i5 12600K:   61982
  Core i9 11900K:   60057
  Core i9 12900K:   98016
  Ryzen 7 5700G:    72228
  Ryzen 7 5800X:    77462
  Ryzen 9 3900X:    85973
  Ryzen 9 3950X:   122633
  Ryzen 9 5950X:   142257

1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVINO


OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better)

  AMD 5950X:       13202.07
  Core i5 12600K:   5715.84
  Core i9 11900K:   9291.28
  Core i9 12900K:  10854.59
  Ryzen 7 5700G:    6674.80
  Ryzen 7 5800X:    9100.45
  Ryzen 9 3900X:   10039.39
  Ryzen 9 3950X:   12887.19
  Ryzen 9 5950X:   13205.42

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)

  AMD 5950X:       36.13
  Core i5 12600K:  35.58
  Core i9 11900K:  33.91
  Core i9 12900K:  52.67
  Ryzen 7 5700G:   23.11
  Ryzen 7 5800X:   28.67
  Ryzen 9 3900X:   29.73
  Ryzen 9 3950X:   37.57
  Ryzen 9 5950X:   38.80

Mobile Neural Network


Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better)

  AMD 5950X:       3.226  (MIN: 3.16 / MAX: 3.37)
  Core i5 12600K:  2.257  (MIN: 2.24 / MAX: 7.91)
  Core i9 11900K:  2.270  (MIN: 2.01 / MAX: 28.53)
  Core i9 12900K:  3.032  (MIN: 2.98 / MAX: 9.97)
  Ryzen 7 5700G:   2.532  (MIN: 2.41 / MAX: 13.37)
  Ryzen 7 5800X:   2.031  (MIN: 1.98 / MAX: 2.5)
  Ryzen 9 3900X:   4.606  (MIN: 3.85 / MAX: 74.81)
  Ryzen 9 3950X:   4.343  (MIN: 4.25 / MAX: 12.46)
  Ryzen 9 5950X:   2.915  (MIN: 2.87 / MAX: 3.97)

OpenVINO


OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better)

  AMD 5950X:       617.73
  Core i5 12600K:  410.69
  Core i9 11900K:  279.54
  Core i9 12900K:  597.27
  Ryzen 7 5700G:   307.90
  Ryzen 7 5800X:   420.26
  Ryzen 9 3900X:   363.51
  Ryzen 9 3950X:   543.33
  Ryzen 9 5950X:   619.79

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better)

  AMD 5950X:       439.51
  Core i5 12600K:  200.88
  Core i9 11900K:  312.59
  Core i9 12900K:  341.57
  Ryzen 7 5700G:   229.37
  Ryzen 7 5800X:   279.63
  Ryzen 9 3900X:   321.06
  Ryzen 9 3950X:   390.89
  Ryzen 9 5950X:   439.51

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (more is better)

  AMD 5950X:       264948
  Core i5 12600K:  208711
  Core i9 11900K:  163335
  Core i9 12900K:  334386
  Ryzen 9 3950X:   249203
  Ryzen 9 5950X:   266752

1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

Mobile Neural Network


Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better)
  AMD 5950X:      22.35
  Core i5 12600K: 17.77
  Core i9 11900K: 21.35
  Core i9 12900K: 21.80
  Ryzen 7 5700G:  23.26
  Ryzen 7 5800X:  18.12
  Ryzen 9 3900X:  35.90
  Ryzen 9 3950X:  31.53
  Ryzen 9 5950X:  21.83
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Meta Performance Per Watts

Meta Performance Per Watts (More Is Better)
  AMD 5950X:      277082.43
  Core i5 12600K: 178.42
  Core i9 11900K: 154.97
  Ryzen 9 3950X:  229081.17
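The performance-per-Watt metric above is, at heart, a "more is better" performance score divided by the average CPU power draw recorded while the test ran. A minimal sketch of that calculation (illustrative only; `perf_per_watt` and its inputs are hypothetical names, not Phoronix Test Suite internals):

```python
def perf_per_watt(score: float, avg_watts: float) -> float:
    """Divide a 'more is better' performance score by average power draw."""
    if avg_watts <= 0:
        raise ValueError("average power must be positive")
    return score / avg_watts

# Example: a 7-Zip compression rating of 94252 MIPS at an average draw
# of 113.2 W (the AMD 5950X's average from the power monitor below)
print(round(perf_per_watt(94252, 113.2), 2))  # MIPS per Watt
```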

OpenVINO

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  AMD 5950X:      2.48
  Core i5 12600K: 1.86
  Core i9 11900K: 1.76
  Core i9 12900K: 2.84
  Ryzen 7 5700G:  1.43
  Ryzen 7 5800X:  1.80
  Ryzen 9 3900X:  2.07
  Ryzen 9 3950X:  2.35
  Ryzen 9 5950X:  2.49
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  AMD 5950X:      1.21
  Core i5 12600K: 1.68
  Core i9 11900K: 0.85
  Core i9 12900K: 1.46
  Ryzen 7 5700G:  1.19
  Ryzen 7 5800X:  0.88
  Ryzen 9 3900X:  1.19
  Ryzen 9 3950X:  1.23
  Ryzen 9 5950X:  1.21
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  AMD 5950X:      221.12
  Core i5 12600K: 112.32
  Core i9 11900K: 117.92
  Core i9 12900K: 113.72
  Ryzen 7 5700G:  172.93
  Ryzen 7 5800X:  139.43
  Ryzen 9 3900X:  201.66
  Ryzen 9 3950X:  212.69
  Ryzen 9 5950X:  206.08
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  AMD 5950X:      94252
  Core i5 12600K: 93185
  Core i9 11900K: 68268
  Core i9 12900K: 134352
  Ryzen 7 5700G:  70302
  Ryzen 7 5800X:  79090
  Ryzen 9 3900X:  86775
  Ryzen 9 3950X:  91757
  Ryzen 9 5950X:  94478
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVINO

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  AMD 5950X:      2.41
  Core i5 12600K: 1.86
  Core i9 11900K: 1.76
  Core i9 12900K: 2.79
  Ryzen 7 5700G:  1.43
  Ryzen 7 5800X:  1.78
  Ryzen 9 3900X:  2.05
  Ryzen 9 3950X:  2.37
  Ryzen 9 5950X:  2.45
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  AMD 5950X:      24180.33
  Core i5 12600K: 13656.40
  Core i9 11900K: 23848.89
  Core i9 12900K: 24283.38
  Ryzen 7 5700G:  12657.33
  Ryzen 7 5800X:  14815.54
  Ryzen 9 3900X:  12874.87
  Ryzen 9 3950X:  17110.58
  Ryzen 9 5950X:  24162.82
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, More Is Better)
  AMD 5950X:      4033021
  Core i5 12600K: 2202188
  Core i9 11900K: 2130480
  Ryzen 9 3950X:  3293680
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Test: Read While Writing

Core i9 12900K: The test quit with a non-zero exit status.

OpenVINO

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  AMD 5950X:      3.69
  Core i5 12600K: 2.37
  Core i9 11900K: 2.91
  Core i9 12900K: 3.75
  Ryzen 7 5700G:  2.00
  Ryzen 7 5800X:  2.38
  Ryzen 9 3900X:  2.95
  Ryzen 9 3950X:  3.35
  Ryzen 9 5950X:  3.73
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

Mobile Neural Network

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  AMD 5950X:      5.259
  Core i5 12600K: 4.112
  Core i9 11900K: 4.111
  Core i9 12900K: 4.292
  Ryzen 7 5700G:  4.760
  Ryzen 7 5800X:  3.865
  Ryzen 9 3900X:  6.985
  Ryzen 9 3950X:  6.997
  Ryzen 9 5950X:  5.089
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Core i5 12600K: 106.82
  Core i9 11900K: 122.78
  Core i9 12900K: 68.67

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better)
  AMD 5950X:      45.04
  Core i5 12600K: 51.83
  Core i9 11900K: 61.26
  Core i9 12900K: 39.83
  Ryzen 7 5700G:  70.36
  Ryzen 7 5800X:  57.42
  Ryzen 9 3900X:  57.48
  Ryzen 9 3950X:  52.15
  Ryzen 9 5950X:  44.86

OpenVINO

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  AMD 5950X:      12.94
  Core i5 12600K: 9.73
  Core i9 11900K: 14.29
  Core i9 12900K: 10.03
  Ryzen 7 5700G:  12.97
  Ryzen 7 5800X:  9.51
  Ryzen 9 3900X:  16.49
  Ryzen 9 3950X:  14.70
  Ryzen 9 5950X:  12.89
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  AMD 5950X:      2145.24
  Core i5 12600K: 1670.46
  Core i9 11900K: 1368.85
  Core i9 12900K: 1591.25
  Ryzen 7 5700G:  1992.30
  Ryzen 7 5800X:  1668.03
  Ryzen 9 3900X:  2017.29
  Ryzen 9 3950X:  2346.05
  Ryzen 9 5950X:  2126.71
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  Core i5 12600K: 1235.98
  Core i9 11900K: 1346.64
  Core i9 12900K: 791.41

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  Core i5 12600K: 302.02
  Core i9 11900K: 329.47
  Core i9 12900K: 196.00

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  Core i5 12600K: 374.46
  Core i9 11900K: 406.46
  Core i9 12900K: 242.03

OpenVINO

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  AMD 5950X:      3177.65
  Core i5 12600K: 2135.31
  Core i9 11900K: 2258.23
  Core i9 12900K: 2082.33
  Ryzen 7 5700G:  2775.38
  Ryzen 7 5800X:  2206.44
  Ryzen 9 3900X:  2872.19
  Ryzen 9 3950X:  3345.24
  Ryzen 9 5950X:  3177.96
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  Core i5 12600K: 153.91
  Core i9 11900K: 159.04
  Core i9 12900K: 99.83

OpenVINO

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
  AMD 5950X:      3267.99
  Core i5 12600K: 2130.59
  Core i9 11900K: 2250.75
  Core i9 12900K: 2102.26
  Ryzen 7 5700G:  2765.95
  Ryzen 7 5800X:  2228.90
  Ryzen 9 3900X:  2887.44
  Ryzen 9 3950X:  3335.63
  Ryzen 9 5950X:  3230.15
1. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, More Is Better)
  AMD 5950X:      764612
  Core i5 12600K: 598750
  Core i9 11900K: 493414
  Core i9 12900K: 604284
  Ryzen 9 3950X:  618421
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better)
  AMD 5950X:      2764908
  Core i5 12600K: 2030067
  Core i9 11900K: 1930921
  Core i9 12900K: 2989095
  Ryzen 9 3950X:  2269855
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Mobile Neural Network

Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better)
  AMD 5950X:      26.01
  Core i5 12600K: 24.64
  Core i9 11900K: 27.08
  Core i9 12900K: 24.28
  Ryzen 7 5700G:  29.66
  Ryzen 7 5800X:  22.55
  Ryzen 9 3900X:  33.28
  Ryzen 9 3950X:  34.48
  Ryzen 9 5950X:  25.79
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
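CouchDB's bulk insertion goes through its real `POST /{db}/_bulk_docs` endpoint, which takes a single JSON object with a `docs` array. A sketch of building one such request body for this test's bulk size of 100 (payload construction only, no network; `bulk_docs_payload` is a hypothetical helper, not part of the test profile):

```python
import json

def bulk_docs_payload(docs):
    """Build the JSON body CouchDB's POST /{db}/_bulk_docs endpoint expects:
    a single object whose 'docs' key holds the documents to insert."""
    return json.dumps({"docs": list(docs)})

# One batch at bulk size 100; at 1000 inserts per round, a round is 10 such requests.
batch = [{"_id": f"doc-{i}", "value": i} for i in range(100)]
payload = bulk_docs_payload(batch)
print(len(json.loads(payload)["docs"]))  # 100 documents per request
```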

Apache CouchDB 3.2.2 - Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better)
  AMD 5950X:      69.37
  Core i5 12600K: 71.92
  Core i9 11900K: 74.10
  Core i9 12900K: 65.72
  Ryzen 9 3950X:  84.60
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better)
  Core i5 12600K: 0.16
  Core i9 11900K: 0.13
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better)
  Core i5 12600K: 0.07
  Core i9 11900K: 0.06
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s, More Is Better)
  Core i5 12600K: 8.48
  Core i9 11900K: 7.56
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, More Is Better)
  Core i5 12600K: 4.39
  Core i9 11900K: 3.95
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
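The MP/s figures that follow are megapixels encoded per second: for the 6000x4000 sample image, 24 megapixels divided by the encode time. A minimal sketch of that conversion (the function name and the example time are illustrative, not values from this result file):

```python
def megapixels_per_second(width: int, height: int, encode_seconds: float) -> float:
    """Throughput in megapixels per second for a single image encode."""
    return (width * height) / 1e6 / encode_seconds

# The 6000x4000 (24 MP) sample at a hypothetical 1.7 s per encode:
print(round(megapixels_per_second(6000, 4000, 1.7), 2))  # 14.12 MP/s
```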

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, More Is Better)
  Core i5 12600K: 14.22
  Core i9 11900K: 13.17
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
  Core i5 12600K: 0.79
  Core i9 11900K: 0.75
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, More Is Better)
  Core i5 12600K: 21.86
  Core i9 11900K: 20.91
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
  Core i5 12600K: 1.99
  Core i9 11900K: 1.95
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better)
  Core i5 12600K: 4.32
  Core i9 11900K: 4.39
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better)
  Core i5 12600K: 12.96
  Core i9 11900K: 12.78
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

WebP2 Image Encode

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression (MP/s, More Is Better)
  Core i5 12600K: 0.02
  Core i9 11900K: 0.02
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
  AMD 5950X:      Min: 26.67 / Avg: 113.2 / Max: 146.12
  Core i5 12600K: Min: 7.81 / Avg: 99.45 / Max: 125.02
  Core i9 11900K: Min: 15.85 / Avg: 185.84 / Max: 269.7
  Ryzen 7 5700G:  Min: 0.19 / Avg: 46.1 / Max: 82.67
  Ryzen 7 5800X:  Min: 5.58 / Avg: 115.26 / Max: 146.87
  Ryzen 9 3900X:  Min: 19.42 / Avg: 117.82 / Max: 146.8
  Ryzen 9 3950X:  Min: 15.22 / Avg: 107.77 / Max: 146.79

60 Results Shown

OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
  Vehicle Detection FP16 - CPU
Mobile Neural Network
OpenVINO:
  Face Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Natron
Mobile Neural Network
OpenVINO
Mobile Neural Network
OpenVINO:
  Vehicle Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
Facebook RocksDB
Mobile Neural Network
OpenVINO
7-Zip Compression
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU
  Machine Translation EN To DE FP16 - CPU
Mobile Neural Network
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU
  Weld Porosity Detection FP16 - CPU
BRL-CAD
Mobile Neural Network
Meta Performance Per Watts
OpenVINO:
  Person Detection FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Machine Translation EN To DE FP16 - CPU
7-Zip Compression
OpenVINO:
  Person Detection FP32 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
Facebook RocksDB
OpenVINO
Mobile Neural Network
Blender
Timed PHP Compilation
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU
  Face Detection FP16 - CPU
Blender:
  Barbershop - CPU-Only
  Classroom - CPU-Only
  Pabellon Barcelona - CPU-Only
OpenVINO
Blender
OpenVINO
Facebook RocksDB:
  Update Random
  Read Random Write Random
Mobile Neural Network
Apache CouchDB
WebP2 Image Encode:
  Quality 75, Compression Effort 7
  Quality 95, Compression Effort 7
  Default
  Quality 100, Compression Effort 5
WebP Image Encode:
  Quality 100
  Quality 100, Lossless, Highest Compression
  Default
  Quality 100, Lossless
  Quality 100, Highest Compression
FLAC Audio Encoding
WebP2 Image Encode
CPU Power Consumption Monitor