new xeon

Intel Xeon Gold 6421N testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2307311-NE-NEWXEON6232&grw&sro.

System details (shared by runs a and b):

Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 512GB
Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04
Kernel: 5.15.0-47-generic (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server 1.21.1.3
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1600x1200

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
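
The Processor Details and Security Details above are the values the Phoronix Test Suite reads back from the kernel's sysfs interface. As a minimal sketch (assuming a standard Linux sysfs layout; nothing in this snippet is specific to this result file), the same governor and mitigation status can be re-checked on a comparable machine:

    # Illustrative sketch: read the sysfs entries summarized in the
    # "Processor Details" and "Security Details" sections above.
    from pathlib import Path

    def scaling_governors():
        # One governor file per logical CPU, e.g. .../cpu0/cpufreq/scaling_governor
        return {p.parent.parent.name: p.read_text().strip()
                for p in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor")}

    def vulnerability_status():
        # Mirrors the spectre_v1 / spectre_v2 / mds / retbleed / ... lines above.
        return {p.name: p.read_text().strip()
                for p in Path("/sys/devices/system/cpu/vulnerabilities").iterdir()}

    if __name__ == "__main__":
        governors = sorted(set(scaling_governors().values()))
        print("scaling governor(s):", ", ".join(governors) or "n/a")
        for name, status in sorted(vulnerability_status().items()):
            print(f"{name}: {status}")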

Result overview table (the combined a/b values for every test) omitted from this view; the individual results follow below for HeFFTe, Laghos, libxsmm, Stress-NG, Palabos, BRL-CAD, Neural Magic DeepSparse, HPCG, OpenFOAM, timed compilation (GDB, LLVM, PHP, Linux kernel), Blender, VVenC, Liquid-DSP, srsRAN Project, Apache IoTDB, Redis + memtier_benchmark, and Apache Cassandra.
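
Each result below reports two runs (a and b) as a mean with a standard error over N = 2 samples. As an illustrative aid (this helper is an assumption of this write-up, not output of the Phoronix Test Suite), a per-test delta can be sanity-checked against the reported standard errors like so:

    # Illustrative helper: percent difference between runs a and b, plus a rough
    # combined standard error of that difference (treating the runs as independent).
    from math import sqrt

    def compare(a, se_a, b, se_b):
        diff_pct = (b - a) / a * 100.0             # relative change of run b vs run a
        combined_se = sqrt(se_a ** 2 + se_b ** 2)  # SE of (b - a)
        return diff_pct, combined_se

    # Example values taken from the first HeFFTe entry below:
    # a = 46.64 GFLOP/s (SE +/- 0.26), b = 49.52 GFLOP/s (SE +/- 3.39)
    pct, se = compare(46.64, 0.26, 49.52, 3.39)
    print(f"b vs a: {pct:+.1f}% (difference {49.52 - 46.64:.2f} +/- {se:.2f} GFLOP/s)")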

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 46.64 (SE +/- 0.26, N = 2), b = 49.52 (SE +/- 3.39, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: double - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 40.74 (SE +/- 0.05, N = 2), b = 40.66 (SE +/- 0.00, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: double - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 74.47 (SE +/- 0.48, N = 2), b = 74.71 (SE +/- 0.16, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 157.87 (SE +/- 6.51, N = 2), b = 164.05 (SE +/- 3.21, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 75.09 (SE +/- 0.48, N = 2), b = 74.93 (SE +/- 0.10, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 121.79 (SE +/- 0.56, N = 2), b = 122.46 (SE +/- 1.22, N = 2). (CXX) g++ options: -O3

Laghos

Test: Triple Point Problem

Major Kernels Total Rate, More Is Better — Laghos 3.1: a = 177.78 (SE +/- 0.13, N = 2), b = 176.92 (SE +/- 0.02, N = 2). (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

libxsmm

M N K: 32

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645: a = 440.0 (SE +/- 0.25, N = 2), b = 444.6 (SE +/- 0.15, N = 2). (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm

M N K: 64

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645: a = 833.8 (SE +/- 1.05, N = 2), b = 839.9 (SE +/- 0.20, N = 2). (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm

M N K: 256

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645: a = 879.6 (SE +/- 0.65, N = 2), b = 758.9 (SE +/- 5.75, N = 2). (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm

M N K: 128

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645: a = 1211.8 (SE +/- 4.60, N = 2), b = 1225.0 (SE +/- 1.10, N = 2). (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Stress-NG

Test: Hash

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 5577252.32 (SE +/- 3166.95, N = 2), b = 5583978.14 (SE +/- 2865.25, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: MMAP

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 861.28 (SE +/- 3.32, N = 2), b = 856.14 (SE +/- 2.06, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: double - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 76.90 (SE +/- 0.40, N = 2), b = 77.03 (SE +/- 0.65, N = 2). (CXX) g++ options: -O3

Laghos

Test: Sedov Blast Wave, ube_922_hex.mesh

Major Kernels Total Rate, More Is Better — Laghos 3.1: a = 216.86 (SE +/- 0.24, N = 2), b = 217.19 (SE +/- 0.18, N = 2). (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Palabos

Grid Size: 100

Mega Site Updates Per Second, More Is Better — Palabos 2.3: a = 235.19 (SE +/- 0.02, N = 2), b = 234.87 (SE +/- 0.34, N = 2). (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 131.66 (SE +/- 0.77, N = 2), b = 130.98 (SE +/- 0.61, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 85.74 (SE +/- 1.30, N = 2), b = 85.49 (SE +/- 0.88, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 76.03 (SE +/- 0.70, N = 2), b = 75.30 (SE +/- 0.08, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: float - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 72.56 (SE +/- 0.21, N = 2), b = 72.54 (SE +/- 0.00, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: float - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 78.83 (SE +/- 0.36, N = 2), b = 78.96 (SE +/- 0.06, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 72.29 (SE +/- 0.44, N = 2), b = 72.20 (SE +/- 0.12, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 207.24 (SE +/- 0.61, N = 2), b = 206.22 (SE +/- 0.19, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 149.94 (SE +/- 1.93, N = 2), b = 151.80 (SE +/- 1.24, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 149.83 (SE +/- 3.76, N = 2), b = 154.05 (SE +/- 1.59, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: float - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 137.54 (SE +/- 0.00, N = 2), b = 137.74 (SE +/- 0.33, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: float - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 141.41 (SE +/- 0.63, N = 2), b = 141.19 (SE +/- 0.20, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: double - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 38.96 (SE +/- 0.07, N = 2), b = 38.68 (SE +/- 0.07, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 64.43 (SE +/- 2.73, N = 2), b = 62.30 (SE +/- 2.41, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3: a = 92.40 (SE +/- 0.90, N = 2), b = 90.99 (SE +/- 0.09, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3: a = 38.93 (SE +/- 0.25, N = 2), b = 38.52 (SE +/- 0.16, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: double - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 76.61 (SE +/- 0.01, N = 2), b = 76.60 (SE +/- 0.11, N = 2). (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: double - X Y Z: 512

GFLOP/s, More Is Better — HeFFTe 2.3: a = 43.97 (SE +/- 0.04, N = 2), b = 44.01 (SE +/- 0.02, N = 2). (CXX) g++ options: -O3

Palabos

Grid Size: 400

Mega Site Updates Per Second, More Is Better — Palabos 2.3: a = 287.27 (SE +/- 0.49, N = 2), b = 285.76 (SE +/- 1.54, N = 2). (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

Palabos

Grid Size: 500

Mega Site Updates Per Second, More Is Better — Palabos 2.3: a = 300.28 (SE +/- 1.63, N = 2), b = 300.86 (SE +/- 1.17, N = 2). (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

Stress-NG

Test: NUMA

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 390.87 (SE +/- 0.88, N = 2), b = 392.08 (SE +/- 0.05, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Pipe

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 35837711.85 (SE +/- 1105250.10, N = 2), b = 36852791.12 (SE +/- 79631.10, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Poll

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 3669281.69 (SE +/- 2536.76, N = 2), b = 3671617.97 (SE +/- 1953.54, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Zlib

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 2647.81 (SE +/- 0.06, N = 2), b = 2648.81 (SE +/- 0.65, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Futex

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 1541676.36 (SE +/- 56630.43, N = 2), b = 1492979.46 (SE +/- 45385.58, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: MEMFD

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 549.94 (SE +/- 1.31, N = 2), b = 549.55 (SE +/- 1.20, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Mutex

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 15147444.51 (SE +/- 23940.47, N = 2), b = 15192892.59 (SE +/- 2864.48, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Atomic

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 133.83 (SE +/- 1.05, N = 2), b = 132.61 (SE +/- 0.20, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Crypto

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 50240.09 (SE +/- 3.65, N = 2), b = 50243.48 (SE +/- 18.13, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Malloc

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 99373474.31 (SE +/- 129754.02, N = 2), b = 99251227.28 (SE +/- 83929.32, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Cloning

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 9740.57 (SE +/- 114.33, N = 2), b = 9326.09 (SE +/- 100.16, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Forking

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 89918.21 (SE +/- 469.20, N = 2), b = 89966.29 (SE +/- 421.24, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Pthread

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 136846.01 (SE +/- 971.78, N = 2), b = 136709.81 (SE +/- 102.07, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: AVL Tree

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 294.26 (SE +/- 0.32, N = 2), b = 294.66 (SE +/- 0.85, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: IO_uring

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 1529665.98 (SE +/- 22482.34, N = 2), b = 1503623.79 (SE +/- 5229.94, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: SENDFILE

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 582724.63 (SE +/- 6799.74, N = 2), b = 598173.56 (SE +/- 243.97, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: CPU Cache

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 1537111.20 (SE +/- 31294.95, N = 2), b = 1885833.11 (SE +/- 234949.06, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: CPU Stress

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 64111.11 (SE +/- 12.73, N = 2), b = 64118.87 (SE +/- 38.95, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Semaphores

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 62126446.21 (SE +/- 2077286.42, N = 2), b = 61651485.43 (SE +/- 466593.23, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Matrix Math

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 160653.44 (SE +/- 2867.57, N = 2), b = 156668.43 (SE +/- 332.46, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Vector Math

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 151386.31 (SE +/- 47.16, N = 2), b = 151431.15 (SE +/- 5.98, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Function Call

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 22028.03 (SE +/- 80.03, N = 2), b = 22106.49 (SE +/- 74.09, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: x86_64 RdRand

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 331416.52 (SE +/- 2.35, N = 2), b = 331423.04 (SE +/- 1.14, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Floating Point

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 10587.48 (SE +/- 1.07, N = 2), b = 10601.10 (SE +/- 17.77, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Matrix 3D Math

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 9599.93 (SE +/- 34.45, N = 2), b = 9605.30 (SE +/- 4.08, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Memory Copying

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 7176.19 (SE +/- 8.71, N = 2), b = 7180.43 (SE +/- 11.04, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Vector Shuffle

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 167204.21 (SE +/- 6.63, N = 2), b = 167202.07 (SE +/- 6.04, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Socket Activity

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 24947.14 (SE +/- 72.57, N = 2), b = 25282.31 (SE +/- 267.39, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Wide Vector Math

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 1745029.27 (SE +/- 918.08, N = 2), b = 1750003.43 (SE +/- 4139.63, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Context Switching

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 2572801.75 (SE +/- 678.57, N = 2), b = 2571092.69 (SE +/- 604.17, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Fused Multiply-Add

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 34197705.63 (SE +/- 137631.48, N = 2), b = 34050669.23 (SE +/- 285.63, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Vector Floating Point

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 58243.38 (SE +/- 30.71, N = 2), b = 58232.70 (SE +/- 4.11, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Glibc C String Functions

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 26067360.60 (SE +/- 150617.25, N = 2), b = 26125214.84 (SE +/- 69329.81, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Glibc Qsort Data Sorting

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 696.65 (SE +/- 0.40, N = 2), b = 696.92 (SE +/- 0.46, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: System V Message Passing

Bogo Ops/s, More Is Better — Stress-NG 0.15.10: a = 5852281.71 (SE +/- 7174.98, N = 2), b = 5854201.78 (SE +/- 9802.94, N = 2). (CXX) g++ options: -O2 -std=gnu99 -lc

BRL-CAD

VGR Performance Metric

VGR Performance Metric, More Is Better — BRL-CAD 7.36: a = 466686 (SE +/- 3768.50, N = 2). (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 34.53 (SE +/- 0.06, N = 2), b = 34.55 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 460.78 (SE +/- 0.42, N = 2), b = 460.76 (SE +/- 2.44, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 1074.82 (SE +/- 0.57, N = 2), b = 1075.96 (SE +/- 1.01, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 14.86 (SE +/- 0.01, N = 2), b = 14.85 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 390.91 (SE +/- 1.01, N = 2), b = 391.91 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 40.91 (SE +/- 0.11, N = 2), b = 40.81 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 121.69 (SE +/- 0.05, N = 2), b = 122.04 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 131.45 (SE +/- 0.05, N = 2), b = 131.07 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 479.79 (SE +/- 0.12, N = 2), b = 480.52 (SE +/- 0.54, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 33.33 (SE +/- 0.01, N = 2), b = 33.28 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 3227.10 (SE +/- 8.40, N = 2), b = 3233.96 (SE +/- 3.51, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 4.9416 (SE +/- 0.0128, N = 2), b = 4.9312 (SE +/- 0.0056, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 208.90 (SE +/- 0.10, N = 2), b = 208.99 (SE +/- 0.05, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 76.56 (SE +/- 0.03, N = 2), b = 76.47 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 35.15 (SE +/- 0.01, N = 2), b = 37.33 (SE +/- 0.38, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 453.48 (SE +/- 0.26, N = 2), b = 428.67 (SE +/- 4.41, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 478.91 (SE +/- 0.05, N = 2), b = 479.22 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 33.39 (SE +/- 0.00, N = 2), b = 33.37 (SE +/- 0.00, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 208.85 (SE +/- 0.34, N = 2), b = 211.23 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 76.58 (SE +/- 0.13, N = 2), b = 75.72 (SE +/- 0.04, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 295.83 (SE +/- 0.05, N = 2), b = 299.93 (SE +/- 0.51, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 54.07 (SE +/- 0.01, N = 2), b = 53.33 (SE +/- 0.09, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 46.33 (SE +/- 0.02, N = 2), b = 46.55 (SE +/- 0.20, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 345.15 (SE +/- 0.15, N = 2), b = 343.52 (SE +/- 1.63, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 504.61 (SE +/- 0.18, N = 2), b = 505.13 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 31.68 (SE +/- 0.01, N = 2), b = 31.65 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 137.38 (SE +/- 4.10, N = 2), b = 143.44 (SE +/- 0.80, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 116.38 (SE +/- 3.45, N = 2), b = 111.50 (SE +/- 0.59, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5: a = 33.94 (SE +/- 0.07, N = 2), b = 34.54 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5: a = 468.80 (SE +/- 1.46, N = 2), b = 460.67 (SE +/- 0.20, N = 2)

High Performance Conjugate Gradient

X Y Z: 104 104 104 - RT: 60

GFLOP/s, More Is Better — High Performance Conjugate Gradient 3.1: a = 27.78 (SE +/- 0.03, N = 2), b = 27.84 (SE +/- 0.01, N = 2). (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

High Performance Conjugate Gradient

X Y Z: 144 144 144 - RT: 60

GFLOP/s, More Is Better — High Performance Conjugate Gradient 3.1: a = 27.42 (SE +/- 0.01, N = 2), b = 27.39 (SE +/- 0.06, N = 2). (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

High Performance Conjugate Gradient

X Y Z: 160 160 160 - RT: 60

GFLOP/s, More Is Better — High Performance Conjugate Gradient 3.1: a = 27.51 (SE +/- 0.03, N = 2), b = 27.40 (SE +/- 0.07, N = 2). (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

Seconds, Fewer Is Better — OpenFOAM 10: a = 27.97 (SE +/- 0.02, N = 2), b = 27.95 (SE +/- 0.05, N = 2). (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

Seconds, Fewer Is Better — OpenFOAM 10: a = 67.71 (SE +/- 0.09, N = 2), b = 67.56 (SE +/- 0.11, N = 2). (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

Seconds, Fewer Is Better — OpenFOAM 10: a = 144.70 (SE +/- 0.01, N = 2), b = 144.94 (SE +/- 0.08, N = 2). (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

Seconds, Fewer Is Better — OpenFOAM 10: a = 615.99 (SE +/- 0.42, N = 2), b = 615.46 (SE +/- 0.03, N = 2). (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Timed GDB GNU Debugger Compilation

Time To Compile

Seconds, Fewer Is Better — Timed GDB GNU Debugger Compilation 10.2: a = 41.91 (SE +/- 0.06, N = 2), b = 42.01 (SE +/- 0.12, N = 2)

Timed LLVM Compilation

Build System: Ninja

Seconds, Fewer Is Better — Timed LLVM Compilation 16.0: a = 263.15 (SE +/- 0.15, N = 2), b = 262.88 (SE +/- 0.15, N = 2)

Timed LLVM Compilation

Build System: Unix Makefiles

Seconds, Fewer Is Better — Timed LLVM Compilation 16.0: a = 323.86 (SE +/- 5.08, N = 2), b = 319.85 (SE +/- 5.88, N = 2)

Timed PHP Compilation

Time To Compile

Seconds, Fewer Is Better — Timed PHP Compilation 8.1.9: a = 42.35 (SE +/- 0.34, N = 2), b = 42.38 (SE +/- 0.48, N = 2)

Timed Linux Kernel Compilation

Build: defconfig

Seconds, Fewer Is Better — Timed Linux Kernel Compilation 6.1: a = 40.44 (SE +/- 0.72, N = 2), b = 40.45 (SE +/- 0.69, N = 2)

Timed Linux Kernel Compilation

Build: allmodconfig

Seconds, Fewer Is Better — Timed Linux Kernel Compilation 6.1: a = 445.39 (SE +/- 1.46, N = 2), b = 445.38 (SE +/- 1.13, N = 2)

Blender

Blend File: BMW27 - Compute: CPU-Only

Seconds, Fewer Is Better — Blender 3.6: a = 47.15 (SE +/- 0.02, N = 2), b = 47.22 (SE +/- 0.08, N = 2)

Blender

Blend File: Classroom - Compute: CPU-Only

Seconds, Fewer Is Better — Blender 3.6: a = 127.78 (SE +/- 0.05, N = 2), b = 127.76 (SE +/- 0.13, N = 2)

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Seconds, Fewer Is Better — Blender 3.6: a = 64.07 (SE +/- 0.08, N = 2), b = 64.01 (SE +/- 0.20, N = 2)

Blender

Blend File: Barbershop - Compute: CPU-Only

Seconds, Fewer Is Better — Blender 3.6: a = 493.45 (SE +/- 0.22, N = 2), b = 493.61 (SE +/- 0.42, N = 2)

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Seconds, Fewer Is Better — Blender 3.6: a = 159.94 (SE +/- 0.04, N = 2)

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

Frames Per Second, More Is Better — VVenC 1.9: a = 5.842 (SE +/- 0.074, N = 2), b = 5.917 (SE +/- 0.015, N = 2). (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

Frames Per Second, More Is Better — VVenC 1.9: a = 11.02 (SE +/- 0.00, N = 2), b = 10.99 (SE +/- 0.03, N = 2). (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

Frames Per Second, More Is Better — VVenC 1.9: a = 16.10 (SE +/- 0.17, N = 2), b = 16.25 (SE +/- 0.02, N = 2). (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

Frames Per Second, More Is Better — VVenC 1.9: a = 30.95 (SE +/- 0.06, N = 2), b = 30.93 (SE +/- 0.04, N = 2). (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

samples/s, More Is Better — Liquid-DSP 1.6: a = 557945000 (SE +/- 2065000.00, N = 2), b = 558655000 (SE +/- 605000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

samples/s, More Is Better — Liquid-DSP 1.6: a = 848435000 (SE +/- 14365000.00, N = 2), b = 862195000 (SE +/- 695000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 32

samples/s, More Is Better — Liquid-DSP 1.6: a = 847085000 (SE +/- 25000.00, N = 2), b = 847675000 (SE +/- 85000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 57

samples/s, More Is Better — Liquid-DSP 1.6: a = 1328100000 (SE +/- 300000.00, N = 2), b = 1323900000 (SE +/- 4400000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 32

samples/s, More Is Better — Liquid-DSP 1.6: a = 1577300000 (SE +/- 300000.00, N = 2), b = 1576850000 (SE +/- 450000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 57

samples/s, More Is Better — Liquid-DSP 1.6: a = 1728850000 (SE +/- 550000.00, N = 2), b = 1733700000 (SE +/- 900000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

samples/s, More Is Better — Liquid-DSP 1.6: a = 243940000 (SE +/- 1950000.00, N = 2), b = 248820000 (SE +/- 3170000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 512

samples/s, More Is Better — Liquid-DSP 1.6: a = 383555000 (SE +/- 1955000.00, N = 2), b = 378650000 (SE +/- 4920000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 512

samples/s, More Is Better — Liquid-DSP 1.6: a = 513135000 (SE +/- 385000.00, N = 2), b = 513040000 (SE +/- 800000.00, N = 2). (CC) gcc options: -O3 -pthread -lm -lc -lliquid

srsRAN Project

Test: Downlink Processor Benchmark

Mbps, More Is Better — srsRAN Project 23.5: a = 705.8 (SE +/- 5.15, N = 2), b = 710.9 (SE +/- 1.60, N = 2). (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

Mbps, More Is Better — srsRAN Project 23.5: a = 5372.9 (SE +/- 143.30, N = 2), b = 5543.7 (SE +/- 95.40, N = 2). (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

Mbps, More Is Better — srsRAN Project 23.5: a = 240.4 (SE +/- 3.55, N = 2), b = 236.3 (SE +/- 0.10, N = 2). (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 710382.44, b = 697217.55

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 14.58 (MAX: 679.89), b = 14.98 (MAX: 612.21)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 1191500.88, b = 1185338.02

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 28.27 (MAX: 671.77), b = 28.45 (MAX: 664.29)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 1045806.81, b = 1042859.03

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 11.86 (MAX: 573.1), b = 12.18 (MAX: 586.62)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 1505080.34, b = 1469808.89

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 26.29 (MAX: 620.79), b = 26.64 (MAX: 636.93)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 1576432.25, b = 1521587.40

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 9.49 (MAX: 845.95), b = 9.87 (MAX: 820.85)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 1916642.90, b = 2009050.46

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 22.97 (MAX: 864.74), b = 21.63 (MAX: 867.44)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 43074031.84, b = 34191814.86

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 31.83 (MAX: 790.74), b = 43.86 (MAX: 2550.76)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 59041436.64, b = 56018457.87

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 69.08 (MAX: 1049.85), b = 73.56 (MAX: 1309.93)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 54224351.10, b = 51199962.11

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 29.54 (MAX: 746.57), b = 31.63 (MAX: 718.08)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 45677447.24, b = 46726912.46

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 101.25 (MAX: 3631.89), b = 98.87 (MAX: 3564.64)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 56894390.61, b = 56137174.70

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 31.58 (MAX: 1920.32), b = 31.69 (MAX: 1610.79)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2: a = 67607191.64, b = 65935725.67

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

Average Latency, More Is Better — Apache IoTDB 1.1.2: a = 68.34 (MAX: 2006.68), b = 68.01 (MAX: 1606.75)

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0: a = 2211638.65 (SE +/- 31848.80, N = 2), b = 2217192.12 (SE +/- 39004.04, N = 2). (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0: a = 2285996.17 (SE +/- 6000.63, N = 2), b = 2227152.02 (SE +/- 3990.38, N = 2). (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0: a = 2316281.26 (SE +/- 13610.76, N = 2), b = 2293467.62 (SE +/- 4548.93, N = 2). (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0: a = 2447092.01 (SE +/- 114392.77, N = 2), b = 2304730.19 (SE +/- 12975.09, N = 2). (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache Cassandra

Test: Writes

Op/s, More Is Better — Apache Cassandra 4.1.3: a = 155626 (SE +/- 803.50, N = 2)


Phoronix Test Suite v10.8.5