Raptor New Tests

Intel Core i5-13600K testing with an ASUS PRIME Z790-P WIFI (0602 BIOS) and AMD Radeon RX 6800 XT 16GB on Ubuntu 22.04 via the Phoronix Test Suite, with an Intel Core i9-13900K run on the same platform included for comparison.

HTML result view exported from: https://openbenchmarking.org/result/2210274-PTS-RAPTORNE00&sro&grs.

Raptor New Tests - System Details

Processor:
  i9-13900K: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
  1, 2: Intel Core i5-13600K @ 3.80GHz (14 Cores / 20 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
OS: Ubuntu 22.04
Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
Vulkan: 1.3.224
Compiler: GCC 12.0.1 20220319
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details:
  i9-13900K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x10e
  1: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10e
  2: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x10e
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Raptor New Tests overview table: side-by-side values for every benchmark below, recorded for runs 1 and 2 (Core i5-13600K) and the i9-13900K comparison run. The individual results follow.]

Cpuminer-Opt

Algorithm: Triple SHA-256, Onecoin

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better)
1: 246670 | 2: 246680 | i9-13900K: 466280
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
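As a quick aid for interpreting these comparisons, here is a minimal Python sketch of the arithmetic behind a "More Is Better" result, using the Triple SHA-256, Onecoin figures above; the variable names are illustrative and not part of the Phoronix Test Suite.

    # Relative speedup for a "More Is Better" (throughput) result.
    # Values copied from the Triple SHA-256, Onecoin chart above (kH/s).
    i5_runs = [246670.0, 246680.0]   # runs "1" and "2" (Core i5-13600K)
    i9_run = 466280.0                # i9-13900K comparison run

    i5_average = sum(i5_runs) / len(i5_runs)
    speedup = i9_run / i5_average    # higher is better, so divide contender by baseline
    print(f"i9-13900K delivers {speedup:.2f}x the i5-13600K throughput")  # ~1.89x

For the "Fewer Is Better" timing results further down, the ratio is inverted: divide the baseline time by the contender time.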

QuadRay

Scene: 5 - Resolution: 4K

QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, More Is Better)
1: 0.47 | 2: 0.47 | i9-13900K: 0.85
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Cpuminer-Opt

Algorithm: Quad SHA-256, Pyrite

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better)
1: 116680 | 2: 116540 | i9-13900K: 210420
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

Scene: 2 - Resolution: 4K

QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, More Is Better)
1: 1.98 | 2: 1.98 | i9-13900K: 3.51
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 5 - Resolution: 1080p

QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, More Is Better)
1: 1.92 | 2: 1.92 | i9-13900K: 3.36
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 3 - Resolution: 4K

QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS, More Is Better)
1: 1.64 | 2: 1.64 | i9-13900K: 2.87
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Facebook RocksDB

Test: Random Read

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, More Is Better)
1: 114483821 | 2: 114072362 | i9-13900K: 199102549
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

QuadRay

Scene: 3 - Resolution: 1080p

QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, More Is Better)
1: 6.39 | 2: 6.41 | i9-13900K: 11.13
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 1 - Resolution: 4K

QuadRay 2022.05.25 - Scene: 1 - Resolution: 4K (FPS, More Is Better)
1: 6.77 | 2: 6.79 | i9-13900K: 11.76
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Cpuminer-Opt

Algorithm: Magi

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better)
1: 670.56 | 2: 667.22 | i9-13900K: 1156.08
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Skeincoin

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better)
1: 100240 | 2: 100270 | i9-13900K: 172790
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: LBC, LBRY Credits

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better)
1: 33800 | 2: 33800 | i9-13900K: 57990
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
1: 3.18 | 2: 3.18 | i9-13900K: 5.45
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

QuadRay

Scene: 2 - Resolution: 1080p

QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, More Is Better)
1: 7.65 | 2: 7.66 | i9-13900K: 13.06
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Facebook RocksDB

Test: Read While Writing

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, More Is Better)
1: 3438785 | 2: 3501623 | i9-13900K: 5840427
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Cpuminer-Opt

Algorithm: scrypt

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better)
1: 209.44 | 2: 208.88 | i9-13900K: 354.26
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt

Algorithm: Ringcoin

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better)
1: 3413.94 | 2: 3403.16 | i9-13900K: 5770.95
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
1: 293.53 | 2: 294.94 | i9-13900K: 497.61
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Cpuminer-Opt

Algorithm: Deepcoin

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better)
1: 11850 | 2: 11850 | i9-13900K: 20040
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

BRL-CAD

VGR Performance Metric

BRL-CAD 7.32.6 - VGR Performance Metric (More Is Better)
1: 295210 | 2: 294709 | i9-13900K: 494464
1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
1: 9307.92 | 2: 9322.44 | i9-13900K: 15612.17
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
1: 87745 | 2: 87472 | i9-13900K: 146708
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
1: 21075.28 | 2: 20970.66 | i9-13900K: 35164.89
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Cpuminer-Opt

Algorithm: Blake-2 S

Cpuminer-Opt 3.20.3 - Algorithm: Blake-2 S (kH/s, More Is Better)
1: 493520 | 2: 516620 | i9-13900K: 826580
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

QuadRay

Scene: 1 - Resolution: 1080p

QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, More Is Better)
1: 26.14 | 2: 26.24 | i9-13900K: 43.70
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Cpuminer-Opt

Algorithm: x25x

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better)
1: 736.46 | 2: 737.91 | i9-13900K: 1230.73
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenRadioss

Model: Rubber O-Ring Seal Installation

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better)
1: 171.09 | 2: 171.03 | i9-13900K: 102.59

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better)
1: 488.94 | 2: 484.35 | i9-13900K: 294.95

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
1: 78.47 | 2: 78.28 | i9-13900K: 47.51

WebP2 Image Encode

Encode Settings: Quality 100, Compression Effort 5

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Compression Effort 5 (MP/s, More Is Better)
1: 6.05 | 2: 6.05 | i9-13900K: 9.98
1. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better)
1: 71.76 | 2: 71.43 | i9-13900K: 117.80

OpenRadioss

Model: Cell Phone Drop Test

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, Fewer Is Better)
1: 103.45 | 2: 104.76 | i9-13900K: 64.01

WebP2 Image Encode

Encode Settings: Quality 95, Compression Effort 7

WebP2 Image Encode 20220823 - Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better)
1: 0.11 | 2: 0.11 | i9-13900K: 0.18
1. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
1: 882.12 | 2: 886.19 | i9-13900K: 541.77

Timed Linux Kernel Compilation

Build: allmodconfig

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (Seconds, Fewer Is Better)
1: 697.84 | 2: 697.21 | i9-13900K: 429.31

spaCy

Model: en_core_web_trf

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, More Is Better)
1: 1582 | 2: 1580 | i9-13900K: 2565

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
1: 113.25 | 2: 113.23 | i9-13900K: 69.83

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
1: 270.60 | 2: 270.68 | i9-13900K: 167.16

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
1: 223.13 | 2: 224.02 | i9-13900K: 138.52

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
1: 24.95 | 2: 24.95 | i9-13900K: 40.30

WebP2 Image Encode

Encode Settings: Quality 75, Compression Effort 7

WebP2 Image Encode 20220823 - Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better)
1: 0.23 | 2: 0.23 | i9-13900K: 0.37
1. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
1: 2.42 | 2: 2.39 | i9-13900K: 3.81
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
1: 2.39 | 2: 2.39 | i9-13900K: 3.79
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better)
1: 24.52 | 2: 24.52 | i9-13900K: 38.65

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
1: 11028.6 | 2: 10993.4 | i9-13900K: 17298.9
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better)
1: 71.94 | 2: 72.06 | i9-13900K: 113.20

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
1: 627.41 | 2: 627.64 | i9-13900K: 984.98
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
1: 1111.59 | 2: 1118.23 | i9-13900K: 1728.49
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Timed Node.js Compilation

Time To Compile

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
1: 377.04 | 2: 379.95 | i9-13900K: 244.81

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: VGG-1612i9-13900K36912157.267.2311.15

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: AlexNet12i9-13900K50100150200250153.62154.20235.75

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream12i9-13900K36912158.54338.558513.0457

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream12i9-13900K36912158.48508.531612.9533

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPU12i9-13900K153045607544.8744.8768.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream12i9-13900K2040608010072.1572.30109.50

Timed Godot Game Engine Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To Compile12i9-13900K2040608010075.8475.7450.02

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPU12i9-13900K51015202512.8412.8419.451. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution Time12i9-13900K50100150200250222.54223.41148.031. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream12i9-13900K4080120160200107.94107.79162.59

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: ResNet-5012i9-13900K91827364525.0725.0537.77

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: AlexNet12i9-13900K60120180240300171.28170.95257.16

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.18Build: defconfig12i9-13900K132639526559.7259.8040.01

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream12i9-13900K122436486036.1736.2953.75

OpenRadioss

Model: Bird Strike on Windshield

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Bird Strike on Windshield12i9-13900K60120180240300257.53252.62173.62

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: AlexNet12i9-13900K50100150200250139.64139.50206.51

libavif avifenc

Encoder Speed: 6

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 612i9-13900K1.08472.16943.25414.33885.42354.8214.8063.2591. (CXX) g++ options: -O3 -fPIC -lm

Timed Mesa Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To Compile12i9-13900K71421283531.1930.6921.09

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: GoogLeNet12i9-13900K30609012015075.5875.62111.79

TensorFlow

Device: CPU - Batch Size: 32 - Model: VGG-16

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 32 - Model: VGG-1612i9-13900K36912157.667.6611.30

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: ResNet-5012i9-13900K91827364525.7125.7437.89

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream12i9-13900K2040608010054.0754.3079.55

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPU12i9-13900K2004006008001000533.65533.78783.931. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Natron

Input: Spaceship

OpenBenchmarking.orgFPS, More Is BetterNatron 2.4.3Input: Spaceship12i9-13900K2468104.74.86.9

7-Zip Compression

Test: Compression Rating

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Rating12i9-13900K40K80K120K160K200K1273591275681861651. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

libavif avifenc

Encoder Speed: 0

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 012i9-13900K2040608010098.9199.6468.741. (CXX) g++ options: -O3 -fPIC -lm

JPEG XL Decoding libjxl

CPU Threads: All

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: All12i9-13900K110220330440550348.52346.72497.77

TensorFlow

Device: CPU - Batch Size: 64 - Model: VGG-16

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 64 - Model: VGG-1612i9-13900K36912158.068.0711.56

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K12i9-13900K0.09680.19360.29040.38720.4840.300.300.431. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenRadioss

Model: Bumper Beam

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2022.10.13Model: Bumper Beam12i9-13900K306090120150154.98153.01108.54

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p12i9-13900K0.28580.57160.85741.14321.4290.890.911.271. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 512 - Model: AlexNet12i9-13900K60120180240300185.01184.89263.55

Facebook RocksDB

Test: Read Random Write Random

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Read Random Write Random12i9-13900K800K1600K2400K3200K4000K2732258272461038821291. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

libavif avifenc

Encoder Speed: 2

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 212i9-13900K112233445547.4247.6133.631. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 256 - Model: GoogLeNet12i9-13900K30609012015079.6679.71111.70

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPU12i9-13900K51015202515.0915.0721.00MIN: 11.84 / MAX: 25.69MIN: 11.77 / MAX: 25.06MIN: 11.86 / MAX: 36.951. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream12i9-13900K4812162017.0017.1212.42

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream12i9-13900K112233445536.7336.1649.85

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream12i9-13900K2040608010058.8058.3980.50

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream12i9-13900K306090120150122.43122.3688.97

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream12i9-13900K36912158.16758.172211.2392

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream12i9-13900K81624324033.4633.7524.66

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream12i9-13900K91827364529.8829.6340.54

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 512 - Model: GoogLeNet12i9-13900K30609012015082.8282.71113.00

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream12i9-13900K306090120150123.95122.9290.75

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream12i9-13900K36912158.06748.135011.0193

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.10Device: CPU - Batch Size: 16 - Model: AlexNet12i9-13900K4080120160200119.21119.07162.08

Facebook RocksDB

Test: Update Random

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 7.5.3Test: Update Random12i9-13900K200K400K600K800K1000K6106826031298201081. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

libavif avifenc

Encoder Speed: 6, Lossless

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6, Lossless12i9-13900K2468107.2757.2855.3611. (CXX) g++ options: -O3 -fPIC -lm

WebP2 Image Encode

Encode Settings: Default

OpenBenchmarking.orgMP/s, More Is BetterWebP2 Image Encode 20220823Encode Settings: Default12i9-13900K4812162011.3511.4515.401. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

WebP2 Image Encode

Encode Settings: Quality 100, Lossless Compression

OpenBenchmarking.orgMP/s, More Is BetterWebP2 Image Encode 20220823Encode Settings: Quality 100, Lossless Compression12i9-13900K0.0090.0180.0270.0360.0450.030.030.041. (CXX) g++ options: -O3 -march=native -mno-avx512f -fno-rtti -ldl

Y-Cruncher

Pi Digits To Calculate: 500M

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 500M12i9-13900K4812162013.8213.7010.38

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Mesh Time12i9-13900K50100150200250232.62240.40180.681. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Y-Cruncher

Pi Digits To Calculate: 1B

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 1B12i9-13900K71421283529.5329.4022.39

Xmrig

Variant: Monero - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1M12i9-13900K2K4K6K8K10K7591.67363.39570.91. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K12i9-13900K36912159.349.3612.101. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p12i9-13900K2040608010084.7699.0176.451. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed PHP Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 8.1.9Time To Compile12i9-13900K102030405044.9745.2234.92

Timed Wasmer Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To Compile12i9-13900K91827364539.1639.1030.841. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Cpuminer-Opt

Algorithm: Garlicoin

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Garlicoin12i9-13900K80016002400320040002877.972867.483616.161. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh Time12i9-13900K81624324033.6533.7526.821. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream12i9-13900K50100150200250190.50191.58238.81

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream12i9-13900K51015202522.1621.8117.73

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream12i9-13900K132639526545.1245.8456.38

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Execution Time12i9-13900K50010001500200025002123.802155.111730.611. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p12i9-13900K153045607554.5254.6767.271. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K12i9-13900K51015202517.6417.6521.661. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed Erlang/OTP Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Erlang/OTP Compilation 25.0Time To Compile12i9-13900K163248648069.9070.3058.25

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p12i9-13900K61218243021.6021.6126.021. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p12i9-13900K4080120160200194.50192.68162.571. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream12i9-13900K369121512.2512.2710.27

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream12i9-13900K2040608010081.6181.4797.34

Timed CPython Compilation

Build Configuration: Default

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Default12i9-13900K4812162014.6814.6912.32

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p12i9-13900K4080120160200190.03187.65159.461. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

libavif avifenc

Encoder Speed: 10, Lossless

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, Lossless12i9-13900K0.89821.79642.69463.59284.4913.9923.9913.3631. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream12i9-13900K306090120150129.14128.85149.89

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream12i9-13900K50100150200250193.04192.54222.61

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16 - Device: CPU12i9-13900K80160240320400331.12331.52380.381. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p12i9-13900K4080120160200158.51155.11138.431. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

Input: JPEG - Quality: 90

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 9012i9-13900K369121511.3711.3813.001. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: PNG - Quality: 80

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 8012i9-13900K369121511.8811.8513.531. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl

Input: JPEG - Quality: 80

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 8012i9-13900K369121511.5711.5513.171. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream12i9-13900K163248648064.8164.7173.77

JPEG XL libjxl

Input: PNG - Quality: 90

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 9012i9-13900K369121511.6811.6913.311. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream12i9-13900K2004006008001000815.29809.49922.02

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream12i9-13900K2004006008001000809.37808.52918.80

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream12i9-13900K81624324035.3035.2631.15

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream12i9-13900K71421283528.3328.3632.09

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream12i9-13900K2040608010096.8196.70109.44

Timed CPython Compilation

Build Configuration: Released Build, PGO + LTO Optimized

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO Optimized12i9-13900K4080120160200193.32193.86171.58

WebP Image Encode

Encode Settings: Quality 100, Lossless, Highest Compression

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest Compression12i9-13900K0.20930.41860.62790.83721.04650.840.830.931. (CC) gcc options: -fvisibility=hidden -O2 -lm

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPU12i9-13900K4812162012.5212.4413.87MIN: 8.61 / MAX: 29.44MIN: 8.58 / MAX: 29.49MIN: 6.82 / MAX: 27.531. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

JPEG XL libjxl

Input: PNG - Quality: 100

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 10012i9-13900K0.24080.48160.72240.96321.2040.970.961.071. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

WebP Image Encode

Encode Settings: Default

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Default12i9-13900K61218243022.7922.7925.371. (CC) gcc options: -fvisibility=hidden -O2 -lm

WebP Image Encode

Encode Settings: Quality 100, Lossless

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless12i9-13900K0.52651.0531.57952.1062.63252.122.112.341. (CC) gcc options: -fvisibility=hidden -O2 -lm

FLAC Audio Encoding

WAV To FLAC

OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.4WAV To FLAC12i9-13900K369121512.5612.5411.381. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

OpenFOAM

Input: motorBike - Execution Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: motorBike - Execution Time12i9-13900K153045607566.5766.9160.891. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

WebP Image Encode

Encode Settings: Quality 100, Highest Compression

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Highest Compression12i9-13900K1.10482.20963.31444.41925.5244.484.494.911. (CC) gcc options: -fvisibility=hidden -O2 -lm

JPEG XL libjxl

Input: JPEG - Quality: 100

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 10012i9-13900K0.2340.4680.7020.9361.170.950.951.041. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenFOAM

Input: motorBike - Mesh Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: motorBike - Mesh Time12i9-13900K61218243027.2227.0324.931. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

spaCy

Model: en_core_web_lg

OpenBenchmarking.orgtokens/sec, More Is BetterspaCy 3.4.1Model: en_core_web_lg12i9-13900K4K8K12K16K20K186781883120381

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPU12i9-13900K36912159.369.3510.19MIN: 7.88 / MAX: 14.96MIN: 7.87 / MAX: 15.72MIN: 7.63 / MAX: 18.811. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K12i9-13900K2040608010093.8593.7386.401. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

WebP Image Encode

Encode Settings: Quality 100

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 10012i9-13900K4812162014.8014.7915.931. (CC) gcc options: -fvisibility=hidden -O2 -lm

Cpuminer-Opt

Algorithm: Myriad-Groestl

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-Groestl12i9-13900K5K10K15K20K25K2082021150196601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

JPEG XL Decoding libjxl

CPU Threads: 1

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: 112i9-13900K163248648065.8565.5770.15

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K12i9-13900K2040608010092.0091.9186.591. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16-INT8 - Device: CPU12i9-13900K90180270360450386.10386.03409.69MIN: 290.17 / MAX: 766.32MIN: 290.04 / MAX: 762.49MIN: 269.91 / MAX: 725.21. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Face Detection FP16 - Device: CPU12i9-13900K300600900120015001540.831543.421464.20MIN: 1492.04 / MAX: 1627.41MIN: 1479.1 / MAX: 1633.16MIN: 1395.26 / MAX: 1540.351. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K12i9-13900K102030405043.6344.0045.851. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Machine Translation EN To DE FP16 - Device: CPU12i9-13900K306090120150111.33111.27116.78MIN: 90.88 / MAX: 159.47MIN: 90.49 / MAX: 160.04MIN: 91.61 / MAX: 157.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.5Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K12i9-13900K153045607568.1568.0265.251. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU12i9-13900K0.1530.3060.4590.6120.7650.660.660.68MIN: 0.51 / MAX: 3.47MIN: 0.49 / MAX: 2.99MIN: 0.44 / MAX: 61. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP32 - Device: CPU12i9-13900K4008001200160020002039.012037.932081.79MIN: 1728.42 / MAX: 2654MIN: 1721.32 / MAX: 2656.11MIN: 1720.91 / MAX: 2619.011. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPU12i9-13900K0.34430.68861.03291.37721.72151.501.501.53MIN: 1.15 / MAX: 2.9MIN: 1.17 / MAX: 2.7MIN: 1.13 / MAX: 3.321. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Vehicle Detection FP16-INT8 - Device: CPU12i9-13900K2468107.967.968.11MIN: 6.36 / MAX: 15.53MIN: 6.36 / MAX: 15.45MIN: 5.93 / MAX: 19.171. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Weld Porosity Detection FP16 - Device: CPU12i9-13900K112233445547.6047.3748.16MIN: 24.24 / MAX: 64.51MIN: 24.26 / MAX: 64.6MIN: 37.48 / MAX: 63.31. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.2.devModel: Person Detection FP16 - Device: CPU12i9-13900K4008001200160020002036.942045.222066.93MIN: 1725.41 / MAX: 2662.16MIN: 1759.13 / MAX: 2658.4MIN: 1688.42 / MAX: 2582.51. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared


Phoronix Test Suite v10.8.5