12400 nnn

Intel Core i5-12400 testing on an MSI PRO Z690-A WIFI DDR4 (MS-7D25) v1.0 motherboard (Dasharo coreboot+UEFI v1.0.0 BIOS) with MSI Intel ADL-S GT1 14GB graphics, on Ubuntu 22.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209127-NE-12400NNN983

Run Management

Run  Date                Test Duration
A    September 11 2022   1 Day, 7 Hours, 59 Minutes
B    September 11 2022   1 Day, 7 Hours, 2 Minutes
C    September 12 2022   1 Day, 6 Hours, 42 Minutes
Average test duration: 1 Day, 7 Hours, 14 Minutes
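The average test duration listed above can be reproduced from the three per-run durations; a minimal sketch, with the durations hard-coded from the table:

```python
# Average the three run durations (A, B, C), expressed in minutes.
durations_min = [
    31 * 60 + 59,  # Run A: 1 day, 7 hours, 59 minutes = 31 h 59 min
    31 * 60 + 2,   # Run B: 1 day, 7 hours, 2 minutes
    30 * 60 + 42,  # Run C: 1 day, 6 hours, 42 minutes
]
avg = round(sum(durations_min) / len(durations_min))  # minutes
days, rem = divmod(avg, 24 * 60)
hours, minutes = divmod(rem, 60)
print(f"{days} Day, {hours} Hours, {minutes} Minutes")  # 1 Day, 7 Hours, 14 Minutes
```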



12400 Nnn Benchmarks

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)
Motherboard: MSI PRO Z690-A WIFI DDR4 (MS-7D25) v1.0 (Dasharo coreboot+UEFI v1.0.0 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: Western Digital WD_BLACK SN750 SE 500GB
Graphics: MSI Intel ADL-S GT1 14GB (1450MHz)
Audio: Realtek ALC897
Monitor: DELL S2409W
Network: Intel I225-V + Intel Device 7af0
OS: Ubuntu 22.04
Kernel: 5.15.0-40-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x1f; Thermald 2.4.9
- Python 3.10.4
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (A vs. B vs. C): relative performance across the three runs ranged from 100% to roughly 105% over the following test suites: Apache CouchDB, Aircrack-ng, FLAC Audio Encoding, WebP2 Image Encode, Natron, memtier_benchmark, Timed Wasmer Compilation, OpenVINO, SVT-AV1, ClickHouse, C-Blosc, GraphicsMagick, Redis, Facebook RocksDB, WebP Image Encode, LAMMPS Molecular Dynamics Simulator, Unpacking The Linux Kernel, Timed PHP Compilation, Mobile Neural Network, Inkscape, Timed Erlang/OTP Compilation, Timed CPython Compilation, srsRAN, Dragonflydb, 7-Zip Compression, NCNN, Blender, BRL-CAD, Unvanquished, Primesieve, ASTC Encoder, and Timed Node.js Compilation.

[Condensed results table: raw A/B/C values for every test in this comparison; the same data is presented in the per-test result sections that follow.]

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
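The MP/s figures reported below are megapixels of input processed per second. For the 6000x4000 sample image that works out as follows; a quick sketch where the encode time is a hypothetical example, not a measured value from this run:

```python
# MP/s = input megapixels / encode time in seconds.
width, height = 6000, 4000           # sample image used by the test profile
megapixels = width * height / 1e6    # 24.0 MP
encode_seconds = 480.0               # hypothetical encode time for illustration
print(round(megapixels / encode_seconds, 2))  # 0.05 MP/s
```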

WebP2 Image Encode 20220823, Encode Settings: Quality 95, Compression Effort 7 (MP/s, More Is Better)
C: 0.04 | B: 0.05 | A: 0.05
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.2, Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better)
B: 1324.70 | A: 1077.45
(CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network deployment, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
C: 134.98 | B: 118.42 | A: 143.69
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
C: 29.61 (MIN: 19.73 / MAX: 45.94) | B: 33.75 (MIN: 19.39 / MAX: 45.57) | A: 27.82 (MIN: 20.55 / MAX: 72.72)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
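When reading the charts below, the "Set To Get Ratio" fixes the write/read mix, so an aggregate ops/sec figure splits proportionally between SETs and GETs. A small illustrative sketch; the 1:10 ratio and throughput number here are example values, not taken from this run:

```python
def split_ops(total_ops_per_sec: float, set_ratio: int, get_ratio: int):
    """Split an aggregate memtier throughput into its SET and GET portions."""
    share = total_ops_per_sec / (set_ratio + get_ratio)
    return set_ratio * share, get_ratio * share

# e.g. a 1:10 set:get workload at 2.2M ops/sec overall
sets, gets = split_ops(2_200_000, 1, 10)
print(f"SET: {sets:,.0f} ops/sec, GET: {gets:,.0f} ops/sec")
```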

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
C: 2256323.36 | B: 2182172.40 | A: 2556576.02
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache CouchDB


Apache CouchDB 3.2.2, Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better)
C: 172.76 | B: 168.26 | A: 195.56
(CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
C: 1789262.36 | B: 2053903.96 | A: 2078165.77
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
C: 2207607.06 | B: 1906142.21 | A: 2190464.39
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

WebP2 Image Encode


WebP2 Image Encode 20220823, Encode Settings: Quality 75, Compression Effort 7 (MP/s, More Is Better)
C: 0.10 | B: 0.09 | A: 0.10
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
C: 30.92 | B: 30.99 | A: 28.30
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
C: 2291270.04 | B: 2260720.03 | A: 2471856.79
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4, Test: GET - Parallel Connections: 1000 (Requests Per Second, More Is Better)
C: 3218151.75 | B: 3515495.75 | A: 3450479.25
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
C: 2590307.90 | B: 2388167.67 | A: 2379834.00
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark 1.4, Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
C: 2437392.24 | B: 2652882.35 | A: 2550490.86
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Sequential Fill (Op/s, More Is Better)
C: 1513059 | B: 1455436 | A: 1410830
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38, Operation: Rotate (Iterations Per Minute, More Is Better)
C: 990 | B: 989 | A: 924
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement", aiming to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. Dragonfly is benchmarked here with Memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6, Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
C: 2768715.57 | B: 2956169.14 | A: 2893948.93
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
C: 2179047.93 | B: 2291531.98 | A: 2147254.39
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
C: 2383162.78 | B: 2437654.47 | A: 2533010.01
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Update Random (Op/s, More Is Better)
C: 632275 | B: 661339 | A: 669994
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick


GraphicsMagick 1.3.38, Operation: Sharpen (Iterations Per Minute, More Is Better)
C: 101 | B: 101 | A: 107
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
C: 2521142.28 | B: 2448790.70 | A: 2394979.44
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
C: 0.64 | B: 0.61 | A: 0.63
(CC) gcc options: -fvisibility=hidden -O2 -lm

GraphicsMagick


GraphicsMagick 1.3.38, Operation: HWB Color Space (Iterations Per Minute, More Is Better)
C: 941 | B: 944 | A: 900
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, More Is Better)
C: 23834.90 | B: 22726.61 | A: 23816.46
(CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

FLAC Audio Encoding

FLAC Audio Encoding 1.4, WAV To FLAC (Seconds, Fewer Is Better)
C: 16.14 | B: 16.81 | A: 16.06
(CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: OFDM_Test (Samples / Second, More Is Better)
C: 152200000 | B: 153000000 | A: 146700000
(CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
C: 62.08 | B: 61.62 | A: 59.62
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3, Input: Spaceship (FPS, More Is Better)
C: 2.6 | B: 2.6 | A: 2.5

Redis


Redis 7.0.4, Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better)
C: 3923159.50 | B: 3893544.25 | A: 4043025.50
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is queries per minute, computed as the geometric mean across all queries performed. Learn more via the OpenBenchmarking.org test page.
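The "Queries Per Minute, Geo Mean" figure uses a geometric rather than arithmetic mean, which keeps one unusually fast or slow query from dominating the composite. A minimal sketch of the calculation; the sample values are illustrative, not results from this run:

```python
import math

def geo_mean(values):
    """Geometric mean: the n-th root of the product of n positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

qpm = [120.0, 135.0, 125.0]  # illustrative per-query QPM values
print(round(geo_mean(qpm), 2))
```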

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
C: 128.47 (MIN: 7.82 / MAX: 15000) | B: 125.64 (MIN: 8.76 / MAX: 8571.43) | A: 130.37 (MIN: 8.15 / MAX: 20000)
ClickHouse server version 22.5.4.19 (official build).

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Random Fill (Op/s, More Is Better)
C: 872567 | B: 857849 | A: 841127
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: resnet18 (ms, Fewer Is Better)
C: 10.34 (MIN: 10.22 / MAX: 11.46) | B: 10.32 (MIN: 10.18 / MAX: 11.88) | A: 9.97 (MIN: 9.85 / MAX: 11.26)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ClickHouse


ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
C: 131.32 (MIN: 5.96 / MAX: 30000) | B: 129.40 (MIN: 8.55 / MAX: 20000) | A: 126.64 (MIN: 8.39 / MAX: 20000)
ClickHouse server version 22.5.4.19 (official build).

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Read While Writing (Op/s, More Is Better)
C: 1583316 | B: 1528756 | A: 1561035
(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
C: 2309526.13 | B: 2290690.91 | A: 2371007.82
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

GraphicsMagick


GraphicsMagick 1.3.38, Operation: Swirl (Iterations Per Minute, More Is Better)
C: 382 | B: 376 | A: 389
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.38, Operation: Resizing (Iterations Per Minute, More Is Better)
C: 806 | B: 822 | A: 796
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

WebP2 Image Encode


WebP2 Image Encode 20220823, Encode Settings: Quality 100, Compression Effort 5 (MP/s, More Is Better)
C: 2.93 | B: 2.84 | A: 2.91
(CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
C: 2152632.21 | B: 2191068.09 | A: 2216344.47
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

GraphicsMagick


GraphicsMagick 1.3.38, Operation: Enhanced (Iterations Per Minute, More Is Better)
C: 172 | B: 174 | A: 169
(CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenVINO


OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
C: 12.13 (MIN: 10.83 / MAX: 55.09) | B: 12.37 (MIN: 10.85 / MAX: 59.04) | A: 12.02 (MIN: 10.83 / MAX: 59.9)
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
C: 329.51 | B: 323.11 | A: 332.45
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

NCNN


NCNN 20220729, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
C: 2.64 (MIN: 2.57 / MAX: 3.63) | B: 2.60 (MIN: 2.56 / MAX: 3.62) | A: 2.57 (MIN: 2.52 / MAX: 3.39)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO


OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
C: 1.16 | B: 1.13 | A: 1.16
(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

memtier_benchmark


memtier_benchmark 1.4, Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
C: 2380367.36 | B: 2323372.49 | A: 2361490.32
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN


NCNN 20220729, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
C: 2.76 (MIN: 2.73 / MAX: 4)
B: 2.79 (MIN: 2.75 / MAX: 3.84)
A: 2.73 (MIN: 2.7 / MAX: 3.57)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is a WebAssembly runtime implementation written in the Rust programming language that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3, Time To Compile (Seconds, Fewer Is Better)
C: 72.28
B: 70.76
A: 71.50
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Random Read (Op/s, More Is Better)
C: 47409319
B: 46884294
A: 46480389
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Dragonflydb

Dragonfly is an open-source database server billed as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark, a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better)
C: 3356477.98
B: 3292038.33
A: 3291442.71
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN 20220729, Target: CPU - Model: mnasnet (ms, Fewer Is Better)
C: 2.58 (MIN: 2.54 / MAX: 3.53)
B: 2.59 (MIN: 2.55 / MAX: 3.59)
A: 2.54 (MIN: 2.5 / MAX: 3.47)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
C: 3.17 (MIN: 3.11 / MAX: 4.22)
B: 3.20 (MIN: 3.13 / MAX: 4.26)
A: 3.14 (MIN: 3.08 / MAX: 4.08)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2022.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
C: 185.45 (MIN: 166.51 / MAX: 234.4)
B: 188.14 (MIN: 168.53 / MAX: 283.55)
A: 184.63 (MIN: 166.48 / MAX: 277)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
C: 21.55
B: 21.23
A: 21.63
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie
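The latency and FPS charts for this model only reconcile if the benchmark keeps several inference requests in flight: a single stream at roughly 185 ms per inference could deliver only about 5.4 FPS, not 21.6. Under a steady-state assumption, Little's law recovers the approximate concurrency from the two reported numbers (run A used here as sample input):

```python
# Estimate concurrent in-flight requests from OpenVINO's reported latency
# and throughput via Little's law: concurrency = throughput * latency.
latency_s = 184.63 / 1000  # run A average latency, converted from ms to seconds
throughput_fps = 21.63     # run A throughput

concurrency = throughput_fps * latency_s
print(f"approximately {concurrency:.1f} requests in flight")
```

This suggests the benchmark app was running around four parallel inference streams on this 6-core CPU, which is plausible but not stated in the result file.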

Dragonflydb

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
C: 3177766.60
B: 3180042.72
A: 3234764.70
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
C: 1.14
B: 1.14
A: 1.12
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3, Test: blosclz bitshuffle (MB/s, More Is Better)
C: 8812.3
B: 8863.5
A: 8710.1
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: nasnet (ms, Fewer Is Better)
C: 8.957 (MIN: 8.89 / MAX: 9.69)
B: 8.890 (MIN: 8.82 / MAX: 10.27)
A: 8.802 (MIN: 8.73 / MAX: 16.46)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN 20220729, Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
C: 7.79 (MIN: 7.7 / MAX: 8.86)
B: 7.79 (MIN: 7.69 / MAX: 8.98)
A: 7.66 (MIN: 7.57 / MAX: 8.69)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: 20k Atoms (ns/day, More Is Better)
C: 6.443
B: 6.444
A: 6.345
1. (CXX) g++ options: -O3 -lm -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
C: 157.6
B: 158.5
A: 156.1
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Mobile Neural Network

Mobile Neural Network 2.1, Model: SqueezeNetV1.0 (ms, Fewer Is Better)
C: 3.796 (MIN: 3.73 / MAX: 4.39)
B: 3.763 (MIN: 3.68 / MAX: 3.98)
A: 3.740 (MIN: 3.67 / MAX: 4.03)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN 20220729, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
C: 5.45 (MIN: 5.38 / MAX: 6.62)
B: 5.49 (MIN: 5.41 / MAX: 6.75)
A: 5.41 (MIN: 5.34 / MAX: 6.81)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729, Target: CPU - Model: googlenet (ms, Fewer Is Better)
C: 10.41 (MIN: 10.29 / MAX: 12.25)
B: 10.46 (MIN: 10.34 / MAX: 11.93)
A: 10.31 (MIN: 10.15 / MAX: 17.15)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Random Fill Sync (Op/s, More Is Better)
C: 3118
B: 3091
A: 3134
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01, Test: Decompression Rating (MIPS, More Is Better)
C: 40328
B: 40510
A: 39968
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4, Test: SET - Parallel Connections: 50 (Requests Per Second, More Is Better)
C: 3088000.0
B: 3047720.0
A: 3061154.5
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6, Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better)
C: 248.58
B: 246.62
A: 245.38

Redis

Redis 7.0.4, Test: SET - Parallel Connections: 1000 (Requests Per Second, More Is Better)
C: 3070176.75
B: 3075085.75
A: 3036378.50
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better)
C: 447.3
B: 446.0
A: 441.8
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Dragonflydb

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
C: 3496653.75
B: 3504814.41
A: 3540006.65
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.2.2, Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, Fewer Is Better)
C: 89.70
B: 89.18
A: 90.29
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

NCNN

NCNN 20220729, Target: CPU - Model: blazeface (ms, Fewer Is Better)
C: 0.83 (MIN: 0.81 / MAX: 1.04)
B: 0.83 (MIN: 0.82 / MAX: 1.01)
A: 0.82 (MIN: 0.8 / MAX: 1.6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
C: 335.94
B: 338.08
A: 334.03
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better)
C: 497.79 (MIN: 474.87 / MAX: 575.57)
B: 495.99 (MIN: 469.78 / MAX: 577.85)
A: 492.08 (MIN: 470.25 / MAX: 542.76)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

C-Blosc

C-Blosc 2.3, Test: blosclz shuffle (MB/s, More Is Better)
C: 14905.8
B: 14987.7
A: 14819.2
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

OpenVINO

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
C: 3412.02 (MIN: 3188.48 / MAX: 4927.3)
B: 3450.77 (MIN: 3109.25 / MAX: 5914.12)
A: 3440.07 (MIN: 3275.56 / MAX: 3921.02)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better)
C: 499.9
B: 494.4
A: 500.0
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Dragonflydb

Dragonflydb 0.6, Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better)
C: 3098427.56
B: 3063805.00
A: 3069817.74
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
C: 7.18
B: 7.21
A: 7.13
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
C: 556.50 (MIN: 502.62 / MAX: 638.24)
B: 554.46 (MIN: 501.44 / MAX: 649.37)
A: 560.65 (MIN: 501.92 / MAX: 644.74)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
C: 2863.52 (MIN: 2829.3 / MAX: 3181.6)
B: 2841.52 (MIN: 2811.41 / MAX: 2910.81)
A: 2831.92 (MIN: 2703.51 / MAX: 3289.07)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

Mobile Neural Network 2.1, Model: resnet-v2-50 (ms, Fewer Is Better)
C: 21.74 (MIN: 21.36 / MAX: 28.59)
B: 21.52 (MIN: 21.2 / MAX: 29.16)
A: 21.75 (MIN: 21.38 / MAX: 22.91)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Redis

Redis 7.0.4, Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better)
C: 3081719.25
B: 3090099.25
A: 3057410.75
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
C: 86.00
B: 86.90
A: 86.35
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.2, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
C: 87.63
B: 87.54
A: 86.72
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19, linux-5.19.tar.xz (Seconds, Fewer Is Better)
C: 6.976
B: 6.933
A: 7.005

NCNN

NCNN 20220729, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
C: 21.33 (MIN: 21.15 / MAX: 21.69)
B: 21.48 (MIN: 21.23 / MAX: 21.95)
A: 21.55 (MIN: 21.35 / MAX: 21.92)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

Mobile Neural Network 2.1, Model: mobilenetV3 (ms, Fewer Is Better)
C: 1.047 (MIN: 1.03 / MAX: 1.85)
B: 1.037 (MIN: 1.02 / MAX: 1.89)
A: 1.043 (MIN: 1.03 / MAX: 1.85)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Dragonflydb

Dragonflydb 0.6, Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
C: 2807087.71
B: 2785310.85
A: 2780403.66
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9, Time To Compile (Seconds, Fewer Is Better)
C: 70.16
B: 70.66
A: 69.99

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
C: 906.50 (MIN: 874.76 / MAX: 944)
B: 903.06 (MIN: 870.7 / MAX: 944.82)
A: 898.18 (MIN: 852.8 / MAX: 953.89)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38, Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
C: 219
B: 219
A: 217
1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100-million-row web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
C: 112.78 (MIN: 6.92 / MAX: 15000)
B: 111.76 (MIN: 5.72 / MAX: 10000)
A: 111.96 (MIN: 7 / MAX: 12000)
1. ClickHouse server version 22.5.4.19 (official build).
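Because the composite value is a geometric mean of per-query rates, a single extremely fast or slow query (note the huge MIN/MAX spread above) moves it far less than an arithmetic mean would. A sketch with made-up per-query rates; the real run aggregates the full web-analytics query set:

```python
from statistics import geometric_mean

# Hypothetical per-query rates in queries per minute, spanning a wide
# range like the MIN/MAX values in the result above.
rates = [7.0, 120.0, 450.0, 12000.0]

print(f"geometric mean:  {geometric_mean(rates):.1f}")
print(f"arithmetic mean: {sum(rates) / len(rates):.1f}")
```

The arithmetic mean is dominated by the 12000 outlier, while the geometric mean stays near the middle of the distribution, which is why it is the preferred composite for heterogeneous query suites.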

Blender

Blender 3.3, Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
C: 172.27
B: 171.31
A: 172.85

OpenVINO

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
C: 1.13 (MIN: 1.01 / MAX: 7.35)
B: 1.14 (MIN: 1.01 / MAX: 7.26)
A: 1.14 (MIN: 1.01 / MAX: 7.24)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Timed CPython Compilation

Timed CPython Compilation 3.10.6, Build Configuration: Default (Seconds, Fewer Is Better)
C: 19.80
B: 19.71
A: 19.88

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
C: 5.249
B: 5.213
A: 5.259
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN 20220729, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
C: 15.15 (MIN: 14.99 / MAX: 16.43)
B: 15.02 (MIN: 14.9 / MAX: 16.5)
A: 15.08 (MIN: 14.92 / MAX: 16.65)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
C: 2557.91 (MIN: 2421.24 / MAX: 2865.14)
B: 2566.17 (MIN: 2417.58 / MAX: 2783.8)
A: 2579.07 (MIN: 2416.44 / MAX: 4596.08)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Mobile Neural Network

Mobile Neural Network 2.1, Model: squeezenetv1.1 (ms, Fewer Is Better)
C: 2.377 (MIN: 2.32 / MAX: 2.67)
B: 2.366 (MIN: 2.31 / MAX: 2.55)
A: 2.358 (MIN: 2.31 / MAX: 2.57)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
C: 191.45
B: 192.99
A: 192.23
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
C: 449.3
B: 451.8
A: 448.4
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

NCNN

NCNN 20220729, Target: CPU - Model: mobilenet (ms, Fewer Is Better)
C: 12.30 (MIN: 12.11 / MAX: 12.65)
B: 12.21 (MIN: 12.05 / MAX: 12.51)
A: 12.24 (MIN: 12.05 / MAX: 12.47)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

Mobile Neural Network 2.1, Model: MobileNetV2_224 (ms, Fewer Is Better)
C: 2.211 (MIN: 2.05 / MAX: 9.77)
B: 2.197 (MIN: 2.04 / MAX: 4.54)
A: 2.195 (MIN: 2.04 / MAX: 2.46)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Apache CouchDB

Apache CouchDB 3.2.2, Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, Fewer Is Better)
C: 281.50
B: 281.70
A: 279.67
1. (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
C: 48.81 (MIN: 43.47 / MAX: 56.79)
B: 48.74 (MIN: 41.9 / MAX: 57.51)
A: 49.09 (MIN: 43.63 / MAX: 55.4)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Inkscape

Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.

Inkscape, Operation: SVG Files To PNG (Seconds, Fewer Is Better)
C: 20.97
B: 21.12
A: 21.07
1. Inkscape 1.1.2 (0a00cf5339, 2022-02-04)

Mobile Neural Network

Mobile Neural Network 2.1, Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
C: 3.065 (MIN: 2.84 / MAX: 3.32)
B: 3.044 (MIN: 2.83 / MAX: 3.61)
A: 3.065 (MIN: 2.85 / MAX: 3.31)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsRAN

srsRAN 22.04.1, Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better)
C: 491.3
B: 492.9
A: 494.5
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

OpenVINO

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
C: 1.55
B: 1.54
A: 1.54
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
C: 5224.02
B: 5190.80
A: 5191.97
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
C: 1.61
B: 1.60
A: 1.61
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0, Time To Compile (Seconds, Fewer Is Better)
C: 111.06
B: 111.60
A: 110.92

NCNN

NCNN 20220729, Target: CPU - Model: FastestDet (ms, Fewer Is Better)
C: 3.34 (MIN: 3.3 / MAX: 3.54)
B: 3.32 (MIN: 3.28 / MAX: 3.53)
A: 3.32 (MIN: 3.29 / MAX: 3.55)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: Rhodopsin Protein (ns/day, More Is Better)
C: 6.564
B: 6.557
A: 6.525
1. (CXX) g++ options: -O3 -lm -ldl
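LAMMPS reports ns/day, i.e. how many nanoseconds of simulated time the machine gets through per wall-clock day, so projecting the wall time for a longer trajectory is a one-line conversion. A sketch using run C's Rhodopsin figure and a hypothetical 100 ns target:

```python
# Convert LAMMPS's ns/day throughput into wall-clock time for a target
# simulation length. The 100 ns target is a hypothetical example, not
# part of this result file.
ns_per_day = 6.564  # run C, Rhodopsin protein
target_ns = 100.0

days = target_ns / ns_per_day
print(f"{target_ns:.0f} ns of simulation takes about {days:.1f} days of wall time")
```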

NCNN

NCNN 20220729, Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
C: 642.89 (MIN: 623.67 / MAX: 765.76)
B: 639.19 (MIN: 618.76 / MAX: 730.12)
A: 640.10 (MIN: 622.52 / MAX: 676.1)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729, Target: Vulkan GPU - Model: FastestDet (ms, Fewer Is Better)
C: 96.87 (MIN: 90.1 / MAX: 104.97)
B: 97.43 (MIN: 91.79 / MAX: 108.05)
A: 97.27 (MIN: 91.44 / MAX: 103.85)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better): C: 516.93 (MIN: 493.25 / MAX: 551.72), B: 515.34 (MIN: 489.32 / MAX: 542.62), A: 514.12 (MIN: 485.62 / MAX: 550.54). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
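
The FPS and average-latency figures for a given model are two views of the same run: with the benchmark app keeping several inference requests in flight, throughput is roughly parallel requests × 1000 / latency in ms. A quick sanity check (a sketch assuming steady-state, fully pipelined execution; the numbers are the Weld Porosity Detection FP16-INT8 results from this file):

```python
def implied_parallel_requests(fps: float, avg_latency_ms: float) -> float:
    """Estimate concurrent in-flight inference requests, assuming a
    steady-state pipeline where throughput = requests * 1000 / latency."""
    return fps * avg_latency_ms / 1000.0

# Run A, Weld Porosity Detection FP16-INT8: 712.15 FPS at 8.41 ms latency.
streams = implied_parallel_requests(712.15, 8.41)
print(round(streams))  # 6, consistent with the i5-12400's six physical cores
```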

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): C: 12274.19, B: 12208.42, A: 12248.29. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220823 - Encode Settings: Default (MP/s, More Is Better): C: 5.85, B: 5.82, A: 5.83. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): C: 241.68, B: 241.40, A: 242.60

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3 (ms, Fewer Is Better): C: 25.86 (MIN: 25.64 / MAX: 33.03), B: 25.76 (MIN: 25.05 / MAX: 33.17), A: 25.74 (MIN: 24.6 / MAX: 32.94). (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better): C: 51.25 (MIN: 51 / MAX: 52.91), B: 51.02 (MIN: 50.73 / MAX: 52.84), A: 51.20 (MIN: 50.92 / MAX: 52.86). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better): C: 1577027, B: 1576451, A: 1569980. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): C: 206.2, B: 206.7, A: 205.8

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better): C: 127.4, B: 127.6, A: 127.1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better): C: 477.49, B: 476.21, A: 478.06

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better): C: 204.35 (MIN: 192.76 / MAX: 215.67), B: 205.14 (MIN: 195.85 / MAX: 226.66), A: 205.01 (MIN: 193.19 / MAX: 221.22). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better): C: 1367.26 (MIN: 1330.65 / MAX: 1405.28), B: 1366.64 (MIN: 1340.24 / MAX: 1424.11), A: 1362.23 (MIN: 1325.99 / MAX: 1412.93). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better): C: 543.95 (MIN: 517.53 / MAX: 571.18), B: 545.92 (MIN: 527.25 / MAX: 576.91), A: 545.67 (MIN: 525.83 / MAX: 575.66). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): C: 8.40 (MIN: 7.47 / MAX: 48.35), B: 8.43 (MIN: 7.47 / MAX: 49.77), A: 8.41 (MIN: 7.46 / MAX: 48.89). (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better): C: 292.89 (MIN: 274.99 / MAX: 320.88), B: 292.26 (MIN: 276.12 / MAX: 312.66), A: 293.29 (MIN: 274.15 / MAX: 323.92). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better): C: 601.99, B: 604.10, A: 603.24

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better): C: 53168, B: 52989, A: 53070. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVINO

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): C: 712.78, B: 710.39, A: 712.15. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
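
Primesieve's segmented implementation is what makes it cache-bound, but the underlying algorithm can be sketched in a few lines. Here is a textbook (unsegmented) sieve for illustration only, not primesieve's actual code:

```python
def sieve(limit: int) -> list[int]:
    """Textbook sieve of Eratosthenes: cross off multiples of each prime.
    primesieve itself uses a segmented variant whose working set is sized
    to fit the L1/L2 caches, which is why it doubles as a cache benchmark."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Zero out every multiple of p starting at p*p.
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(sieve(100)))  # 25 primes up to 100
```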

Primesieve 8.0 - Length: 1e13 (Seconds, Fewer Is Better): C: 343.39, B: 342.69, A: 342.26. (CXX) g++ options: -O3

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better): C: 267.27 (MIN: 260.17 / MAX: 282.31), B: 266.93 (MIN: 259.65 / MAX: 277.8), A: 266.42 (MIN: 259.19 / MAX: 280.54). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): C: 167.03 (MIN: 158.39 / MAX: 178.86), B: 167.14 (MIN: 157.61 / MAX: 177.49), A: 167.56 (MIN: 157.98 / MAX: 177.45). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better): C: 200.92 (MIN: 199.03 / MAX: 207.08), B: 200.59 (MIN: 198.98 / MAX: 206.57), A: 200.29 (MIN: 198.86 / MAX: 206.5). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.6 - VGR Performance Metric (More Is Better): C: 118684, B: 118454, A: 118824. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

srsRAN

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better): C: 176.9, B: 176.7, A: 176.4. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
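
The MP/s metric maps directly onto per-image encode time: the 6000x4000 input is 24 megapixels, so dividing by the measured throughput gives seconds per image. A small conversion sketch (the 17.82 MP/s figure is the Default-settings result from run C in this file):

```python
def seconds_per_image(width: int, height: int, throughput_mp_s: float) -> float:
    """Convert encode throughput in megapixels/second into the time
    needed to encode one image of the given dimensions."""
    megapixels = width * height / 1e6
    return megapixels / throughput_mp_s

# The 6000x4000 (24 MP) sample at 17.82 MP/s takes about 1.35 s to encode.
print(round(seconds_per_image(6000, 4000, 17.82), 2))  # 1.35
```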

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better): C: 3.56, B: 3.56, A: 3.57. (CC) gcc options: -fvisibility=hidden -O2 -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): C: 1.521, B: 1.517, A: 1.517. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better): C: 8.10 (MIN: 8.01 / MAX: 9.19), B: 8.08 (MIN: 8.01 / MAX: 9.15), A: 8.10 (MIN: 8 / MAX: 9.18). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Primesieve

Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better): C: 28.59, B: 28.63, A: 28.66. (CXX) g++ options: -O3

NCNN

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better): C: 18.75 (MIN: 18.58 / MAX: 25.94), B: 18.71 (MIN: 18.55 / MAX: 20.43), A: 18.73 (MIN: 18.54 / MAX: 20.36). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
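
The requests-per-second metric is simply completed operations divided by wall time. A minimal sketch of that measurement loop, using a plain Python dict as a stand-in for the Redis server (the real test drives GET commands over 500 parallel client connections, which this single-threaded sketch does not attempt to model):

```python
import time

def ops_per_second(op, n: int = 200_000) -> float:
    """Time n repetitions of op() and report operations per second."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    elapsed = time.perf_counter() - start
    return n / elapsed

store = {"user:1000": "alice"}  # stand-in for the Redis keyspace
rate = ops_per_second(lambda: store.get("user:1000"))
print(f"{rate:,.0f} ops/sec")
```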

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better): C: 3894757.25, B: 3902966.00, A: 3896578.50. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

srsRAN

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better): C: 145.1, B: 145.1, A: 144.8. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better): C: 764.18, B: 762.63, A: 763.06

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better): C: 536.66 (MIN: 526.49 / MAX: 573.49), B: 537.72 (MIN: 523.15 / MAX: 586.13), A: 537.52 (MIN: 526.68 / MAX: 590.41). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

srsRAN

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better): C: 160.5, B: 160.3, A: 160.2. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

OpenVINO

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better): C: 32.13 (MIN: 28.62 / MAX: 99.35), B: 32.11 (MIN: 28.55 / MAX: 100.34), A: 32.07 (MIN: 28.53 / MAX: 85.19). (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Unvanquished

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better): C: 109.1, B: 108.9, A: 108.9

WebP Image Encode

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, More Is Better): C: 17.82, B: 17.83, A: 17.80. (CC) gcc options: -fvisibility=hidden -O2 -lm

OpenVINO

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): C: 186.67, B: 186.71, A: 186.97. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better): C: 1937.31, B: 1936.18, A: 1939.29

OpenVINO

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): C: 274.67, B: 275.05, A: 274.91. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): C: 14.55 (MIN: 12.97 / MAX: 25.07), B: 14.53 (MIN: 13 / MAX: 58.73), A: 14.54 (MIN: 13.06 / MAX: 59.4). (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better): C: 106.47 (MIN: 103.09 / MAX: 113.2), B: 106.46 (MIN: 103.17 / MAX: 112.04), A: 106.33 (MIN: 102.28 / MAX: 113.65). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): C: 3483.55 (MIN: 3230.95 / MAX: 4583.23), B: 3484.58 (MIN: 3311.7 / MAX: 3906.03), A: 3480.06 (MIN: 3148.94 / MAX: 5906.68). (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Unvanquished

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, More Is Better): C: 253.5, B: 253.2, A: 253.4

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
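
The MT/s (megatexels per second) results make the preset trade-off concrete: a single 3840x2160 texture is about 8.29 megatexels, so compressing it takes well under a second at the Fast preset but roughly 15 seconds at Exhaustive. A quick estimate from the run-A figures in this file:

```python
def compress_time_s(width: int, height: int, rate_mt_s: float) -> float:
    """Estimated time to compress one texture at a given MT/s rate."""
    return width * height / 1e6 / rate_mt_s

# Run A: Fast = 118.61 MT/s, Exhaustive = 0.5585 MT/s
fast = compress_time_s(3840, 2160, 118.61)        # ~0.07 s
exhaustive = compress_time_s(3840, 2160, 0.5585)  # ~14.9 s
print(round(fast, 2), round(exhaustive, 1))
```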

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better): C: 0.5579, B: 0.5581, A: 0.5585. (CXX) g++ options: -O3 -flto -pthread

NCNN

NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): C: 198.50 (MIN: 187.82 / MAX: 207.05), B: 198.31 (MIN: 186.96 / MAX: 210.08), A: 198.49 (MIN: 186.14 / MAX: 224.78). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, More Is Better): C: 11.49, B: 11.48, A: 11.48. (CC) gcc options: -fvisibility=hidden -O2 -lm

NCNN

NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms, Fewer Is Better): C: 7496.91 (MIN: 7397.5 / MAX: 7769.17), B: 7491.53 (MIN: 7393.35 / MAX: 7743.27), A: 7493.77 (MIN: 7376.2 / MAX: 7952.36). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better): C: 118.64, B: 118.61, A: 118.61. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better): C: 46.42, B: 46.43, A: 46.42. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better): C: 6.1360, B: 6.1364, A: 6.1356. (CXX) g++ options: -O3 -flto -pthread

OpenVINO

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): C: 0.49 (MIN: 0.44 / MAX: 6.53), B: 0.49 (MIN: 0.45 / MAX: 6.14), A: 0.49 (MIN: 0.45 / MAX: 3.36). (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

srsRAN

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better): C: 73.3, B: 73.3, A: 73.3. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm

WebP2 Image Encode

WebP2 Image Encode 20220823 - Encode Settings: Quality 100, Lossless Compression (MP/s, More Is Better): C: 0.01, B: 0.01, A: 0.01. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl

172 Results Shown

WebP2 Image Encode
Apache CouchDB
OpenVINO:
  Vehicle Detection FP16 - CPU:
    FPS
    ms
memtier_benchmark
Apache CouchDB
memtier_benchmark:
  Redis - 100 - 5:1
  Redis - 100 - 1:1
WebP2 Image Encode
SVT-AV1
memtier_benchmark
Redis
memtier_benchmark:
  Redis - 500 - 1:5
  Redis - 500 - 1:10
Facebook RocksDB
GraphicsMagick
Dragonflydb
memtier_benchmark:
  Redis - 500 - 5:1
  Redis - 50 - 1:5
Facebook RocksDB
GraphicsMagick
memtier_benchmark
WebP Image Encode
GraphicsMagick
Aircrack-ng
FLAC Audio Encoding
srsRAN
SVT-AV1
Natron
Redis
ClickHouse
Facebook RocksDB
NCNN
ClickHouse
Facebook RocksDB
memtier_benchmark
GraphicsMagick:
  Swirl
  Resizing
WebP2 Image Encode
memtier_benchmark
GraphicsMagick
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
NCNN
OpenVINO
memtier_benchmark
NCNN
Timed Wasmer Compilation
Facebook RocksDB
Dragonflydb
NCNN:
  CPU - mnasnet
  CPU-v2-v2 - mobilenet-v2
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
Dragonflydb
OpenVINO
C-Blosc
Mobile Neural Network
NCNN
LAMMPS Molecular Dynamics Simulator
srsRAN
Mobile Neural Network
NCNN:
  CPU - efficientnet-b0
  CPU - googlenet
Facebook RocksDB
7-Zip Compression
Redis
Timed CPython Compilation
Redis
srsRAN
Dragonflydb
Apache CouchDB
NCNN
SVT-AV1
NCNN
C-Blosc
OpenVINO
srsRAN
Dragonflydb
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
NCNN
Mobile Neural Network
Redis
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 4K
Unpacking The Linux Kernel
NCNN
Mobile Neural Network
Dragonflydb
Timed PHP Compilation
NCNN
GraphicsMagick
ClickHouse
Blender
OpenVINO
Timed CPython Compilation
SVT-AV1
NCNN
OpenVINO
Mobile Neural Network
SVT-AV1
srsRAN
NCNN
Mobile Neural Network
Apache CouchDB
NCNN
Inkscape
Mobile Neural Network
srsRAN
OpenVINO:
  Face Detection FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
WebP Image Encode
Timed Erlang/OTP Compilation
NCNN
LAMMPS Molecular Dynamics Simulator
NCNN:
  Vulkan GPU - mobilenet
  Vulkan GPU - FastestDet
  Vulkan GPU - resnet18
OpenVINO
WebP2 Image Encode
Blender
Mobile Neural Network
NCNN
Facebook RocksDB
Unvanquished
srsRAN
Blender
NCNN:
  Vulkan GPU - mnasnet
  Vulkan GPU - resnet50
  Vulkan GPU - googlenet
OpenVINO
NCNN
Blender
7-Zip Compression
OpenVINO
Primesieve
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU-v3-v3 - mobilenet-v3
  CPU - vision_transformer
BRL-CAD
srsRAN
WebP Image Encode
SVT-AV1
NCNN
Primesieve
NCNN
Redis
srsRAN
Timed Node.js Compilation
NCNN
srsRAN
OpenVINO
Unvanquished
WebP Image Encode
OpenVINO
Blender
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
NCNN
OpenVINO
Unvanquished
ASTC Encoder
NCNN
WebP Image Encode
NCNN
ASTC Encoder:
  Fast
  Medium
  Thorough
OpenVINO
srsRAN
WebP2 Image Encode