eoy2024

AMD EPYC 4564P 16-Core testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2412061-NE-EOY20243073

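The Phoronix Test Suite summarizes multi-test comparisons with overall geometric and harmonic means, optionally normalizing each test against a baseline run. As a rough illustration of how such cross-test summaries are computed, here is a minimal Python sketch; the scores and test names are hypothetical, not values from this result file:

```python
import math

def geometric_mean(values):
    # nth root of the product of n values; the usual cross-test summary,
    # since it is insensitive to each test's absolute scale
    return math.exp(sum(math.log(v) for v in values) / len(values))

def harmonic_mean(values):
    # reciprocal of the mean of reciprocals; appropriate for rate-style results
    return len(values) / sum(1.0 / v for v in values)

def normalize(results, baseline="a"):
    # scale each test's runs so the baseline run equals 1.0
    return {
        test: {run: v / runs[baseline] for run, v in runs.items()}
        for test, runs in results.items()
    }

# Hypothetical higher-is-better scores for two runs across two tests
results = {
    "test1": {"a": 100.0, "b": 110.0},
    "test2": {"a": 50.0, "b": 45.0},
}
norm = normalize(results)
b_scores = [norm[t]["b"] for t in norm]
print(round(geometric_mean(b_scores), 4))  # overall score of run b relative to run a
```

Note that with normalized data the geometric mean is the only mean whose run-versus-run ratio is independent of which run is chosen as the baseline, which is why result viewers favor it for overall comparisons.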


Run Management

Result Identifier    Date Run       Test Duration
a                    December 05    6 Hours, 48 Minutes
b                    December 06    6 Hours, 49 Minutes
c                    December 06    2 Hours, 24 Minutes
Average                             5 Hours, 20 Minutes



eoy2024 Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

Processor:          AMD EPYC 4564P 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard:        Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS)
Chipset:            AMD Device 14d8
Memory:             2 x 32GB DRAM-4800MT/s Micron MTC20C2085S1EC48BA1 BC
Disk:               3201GB Micron_7450_MTFDKCC3T2TFS + 960GB SAMSUNG MZ1L2960HCJR-00A07
Graphics:           ASPEED
Audio:              AMD Rembrandt Radeon HD Audio
Monitor:            VA2431
Network:            2 x Intel I210
OS:                 Ubuntu 24.04
Kernel:             6.8.0-11-generic (x86_64)
Desktop:            GNOME Shell 45.3
Display Server:     X Server 1.21.1.11
Compiler:           GCC 13.2.0
File-System:        ext4
Screen Resolution:  1024x768

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: amd-pstate-epp performance (EPP: performance)
- CPU Microcode: 0xa601209
- OpenJDK Runtime Environment (build 21.0.2+13-Ubuntu-2)
- Python 3.12.3
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite, relative performance of runs a/b/c, 100% to 110%), covering: Stockfish, RELION, CP2K Molecular Dynamics, Renaissance, x265, simdjson, ACES DGEMM, SVT-AV1, NAMD, Etcpak, OSPRay, 7-Zip Compression, QuantLib, BYTE Unix Benchmark

Flattened results-table residue (all tests, runs a/b/c) omitted here; the individual per-test results follow below.

LiteRT

LiteRT 2024-10-15, Model: Quantized COCO SSD MobileNet v1 (Microseconds, Fewer Is Better): b: 2958.48, a: 2129.52

LiteRT 2024-10-15, Model: NASNet Mobile (Microseconds, Fewer Is Better): b: 21468.7, a: 16936.0

LiteRT 2024-10-15, Model: DeepLab V3 (Microseconds, Fewer Is Better): b: 4287.06, a: 3579.67

LiteRT 2024-10-15, Model: Mobilenet Quant (Microseconds, Fewer Is Better): b: 933.18, a: 823.17

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: Fayalite-FIST (Seconds, Fewer Is Better): c: 105.22, b: 102.42, a: 94.03
1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): b: 61.62, a: 68.40
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Stockfish

Stockfish 17, Chess Benchmark (Nodes Per Second, More Is Better): c: 53623108, a: 54752796, b: 59130265
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

RELION

RELION 5.0, Test: Basic - Device: CPU (Seconds, Fewer Is Better): a: 944.27, c: 939.90, b: 867.32
1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -lfftw3f -lfftw3 -ldl -ltiff -lpng -ljpeg -lmpi_cxx -lmpi

LiteRT

LiteRT 2024-10-15, Model: Inception V4 (Microseconds, Fewer Is Better): b: 23265.4, a: 21477.8

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): b: 328.47, a: 355.09

Renaissance

Renaissance 0.16, Test: Apache Spark Bayes (ms, Fewer Is Better): b: 529.5 (MIN 458.39 / MAX 562.09), c: 500.3 (MIN 460.66 / MAX 542.36), a: 490.0 (MIN 459.29 / MAX 580.9)

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, More Is Better): a: 70.76, b: 75.96

Llama.cpp b4154, Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, More Is Better): b: 66.00, a: 70.85

LiteRT

LiteRT 2024-10-15, Model: Mobilenet Float (Microseconds, Fewer Is Better): b: 1295.51, a: 1211.48

Renaissance

Renaissance 0.16, Test: In-Memory Database Shootout (ms, Fewer Is Better): a: 3256.1 (MIN 3019.89 / MAX 3599.5), b: 3081.5 (MIN 2836.52 / MAX 3397.02), c: 3046.8 (MIN 2814.66 / MAX 3304.16)

Renaissance 0.16, Test: Scala Dotty (ms, Fewer Is Better): a: 477.0 (MIN 371.54 / MAX 736.5), c: 458.5 (MIN 406.93 / MAX 746.39), b: 447.0 (MIN 402.95 / MAX 718.21)

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: H20-256 (Seconds, Fewer Is Better): b: 629.56, c: 624.57, a: 592.86

Renaissance

Renaissance 0.16, Test: Random Forest (ms, Fewer Is Better): c: 420.8 (MIN 316.29 / MAX 556.39), a: 414.4 (MIN 322.79 / MAX 466.1), b: 398.1 (MIN 343.09 / MAX 475.62)

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): b: 59.84, a: 63.09

Rustls

Rustls 0.23.17, Benchmark: handshake - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s, More Is Better): b: 402625.06, a: 423535.68
1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

simdjson

simdjson 3.10, Throughput Test: LargeRandom (GB/s, More Is Better): c: 1.74, b: 1.81, a: 1.83
1. (CXX) g++ options: -O3 -lrt

Gcrypt Library

Gcrypt Library 1.10.3 (Seconds, Fewer Is Better): a: 162.13, b: 154.53
1. (CC) gcc options: -O2 -fvisibility=hidden

XNNPACK

XNNPACK b7b048, Model: FP16MobileNetV2 (us, Fewer Is Better): b: 1247, a: 1190
1. (CXX) g++ options: -O3 -lrt -lm

oneDNN

oneDNN 3.6, Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, Fewer Is Better): b: 3.11260 (MIN 2.4), a: 2.97612 (MIN 2.42)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

LiteRT

LiteRT 2024-10-15, Model: Inception ResNet V2 (Microseconds, Fewer Is Better): b: 20375.7, a: 19530.2

XNNPACK

XNNPACK b7b048, Model: FP32MobileNetV2 (us, Fewer Is Better): b: 1559, a: 1495

simdjson

simdjson 3.10, Throughput Test: Kostya (GB/s, More Is Better): c: 5.73, b: 5.93, a: 5.97

PyPerformance

PyPerformance 1.11, Benchmark: asyncio_tcp_ssl (Milliseconds, Fewer Is Better): b: 672, a: 645

Llamafile

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16 (Tokens Per Second, More Is Better): b: 18.32, a: 19.03

XNNPACK

XNNPACK b7b048, Model: FP32MobileNetV3Large (us, Fewer Is Better): b: 1877, a: 1810

LiteRT

LiteRT 2024-10-15, Model: SqueezeNet (Microseconds, Fewer Is Better): b: 1860.35, a: 1794.11

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 62.97, b: 65.27

Renaissance

Renaissance 0.16, Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better): b: 744.3 (MIN 714.12 / MAX 802.66), a: 732.8 (MIN 713.67 / MAX 813.49), c: 719.1 (MIN 670.9 / MAX 764.9)

XNNPACK

XNNPACK b7b048, Model: FP16MobileNetV3Large (us, Fewer Is Better): b: 1549, a: 1498

simdjson

simdjson 3.10, Throughput Test: TopTweet (GB/s, More Is Better): a: 10.46, c: 10.79, b: 10.80

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128 (Tokens Per Second, More Is Better): b: 46.28, a: 47.72

XNNPACK

XNNPACK b7b048, Model: FP32MobileNetV1 (us, Fewer Is Better): b: 1290, a: 1252

Renaissance

Renaissance 0.16, Test: Gaussian Mixture Model (ms, Fewer Is Better): b: 3494.8 (MIN 2520.23), c: 3472.4 (MIN 2469.6), a: 3399.5 (MIN 2471.52)

XNNPACK

XNNPACK b7b048, Model: FP16MobileNetV1 (us, Fewer Is Better): b: 1174, a: 1143

XNNPACK b7b048, Model: FP32MobileNetV3Small (us, Fewer Is Better): b: 1005, a: 979

Renaissance

Renaissance 0.16, Test: Savina Reactors.IO (ms, Fewer Is Better): b: 3594.3 (MIN 3594.26 / MAX 4599.09), c: 3567.8 (MAX 5162.74), a: 3506.4 (MIN 3506.38 / MAX 4329.37)

Renaissance 0.16, Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better): b: 4439.9 (MAX 5696.46), a: 4403.8 (MAX 5719.11), c: 4331.7 (MIN 4331.69 / MAX 5601.8)

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): b: 330.87, c: 338.65, a: 339.02
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 2.3, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): b: 99.55, c: 100.89, a: 102.01

oneDNN

oneDNN 3.6, Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better): b: 4.15682 (MIN 3.75), a: 4.05800 (MIN 3.75)

Renaissance

Renaissance 0.16, Test: Finagle HTTP Requests (ms, Fewer Is Better): a: 2319.4 (MIN 1832.84), c: 2296.6 (MIN 1805.17), b: 2264.7 (MIN 1788.41 / MAX 2264.71)

oneDNN

oneDNN 3.6, Harness: IP Shapes 1D - Engine: CPU (ms, Fewer Is Better): b: 1.15274 (MIN 1.03), a: 1.12573 (MIN 1.03)

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048 (Tokens Per Second, More Is Better): a: 279.04, b: 285.71

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: H20-64 (Seconds, Fewer Is Better): c: 58.65, a: 58.19, b: 57.35

oneDNN

oneDNN 3.6, Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better): b: 6.81754 (MIN 6.2), a: 6.67287 (MIN 6.2)

x265

x265, Video Input: Bosphorus 4K (Frames Per Second, More Is Better): b: 32.04, a: 32.57, c: 32.73
1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): b: 824.81, c: 838.17, a: 842.56

SVT-AV1 2.3, Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): b: 6.371, c: 6.374, a: 6.504

Timed Eigen Compilation

Timed Eigen Compilation 3.4.0, Time To Compile (Seconds, Fewer Is Better): b: 59.87, a: 58.66

oneDNN

oneDNN 3.6, Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better): b: 2.46279 (MIN 2.35), a: 2.41294 (MIN 2.34)

Rustls

Rustls 0.23.17, Benchmark: handshake-resume - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, More Is Better): b: 380493.86, a: 388077.69

Whisper.cpp

Whisper.cpp 1.6.2, Model: ggml-small.en - Input: 2016 State of the Union (Seconds, Fewer Is Better): a: 245.08, b: 240.60
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Rustls

Rustls 0.23.17, Benchmark: handshake-ticket - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, More Is Better): b: 397022.40, a: 404263.45

ONNX Runtime

ONNX Runtime 1.19, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): b: 15.32, a: 15.59
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Llamafile

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128 (Tokens Per Second, More Is Better): b: 25.83, a: 26.28

Rustls

Rustls 0.23.17, Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, More Is Better): b: 79085.8, a: 80462.6

Rustls 0.23.17, Benchmark: handshake-resume - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, More Is Better): b: 3504511.31, a: 3563852.57

Stockfish

Stockfish, Chess Benchmark (Nodes Per Second, More Is Better): b: 45751747, a: 46507038
1. Stockfish 16 by the Stockfish developers (see AUTHORS file)

POV-Ray

POV-Ray, Trace Time (Seconds, Fewer Is Better): b: 18.85, a: 18.54
1. POV-Ray 3.7.0.10.unofficial

Llamafile

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128 (Tokens Per Second, More Is Better): a: 10.47, b: 10.64

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128 (Tokens Per Second, More Is Better): b: 19.81, a: 20.13

Renaissance

Renaissance 0.16, Test: ALS Movie Lens (ms, Fewer Is Better): b: 9958.3 (MIN 9305.94 / MAX 10040.58), c: 9907.4 (MIN 9393.64 / MAX 10087.8), a: 9805.7 (MIN 9253.4 / MAX 10057.61)

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): b: 209.77, a: 212.52, c: 212.95

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Inference - Engine: CPU (ms, Fewer Is Better): b: 711.43 (MIN 684.03), a: 700.86 (MIN 679.89)

ONNX Runtime

ONNX Runtime 1.19, Model: ZFNet-512 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): a: 102.33; b: 103.86. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

x265

x265, Video Input: Bosphorus 1080p (Frames Per Second, more is better): b: 112.85; a: 114.45; c: 114.52. 1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

ONNX Runtime

ONNX Runtime 1.19, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 3.1705; a: 3.2167. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

simdjson

simdjson 3.10, Throughput Test: PartialTweets (GB/s, more is better): c: 9.68; a: 9.76; b: 9.82. 1. (CXX) g++ options: -O3 -lrt

Whisperfile

Whisperfile 20Aug24, Model Size: Small (Seconds, fewer is better): a: 195.42; b: 192.68

ONNX Runtime

ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): a: 47.07; b: 47.73. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

NAMD

NAMD 3.0, Input: ATPase with 327,506 Atoms (ns/day, more is better): b: 2.79025; a: 2.79632; c: 2.82925

ONNX Runtime

ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 1.52109; a: 1.54196. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 100 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): b: 235.35; a: 232.19. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better): b: 765.35; a: 775.75

PyPerformance

PyPerformance 1.11, Benchmark: chaos (Milliseconds, fewer is better): b: 38.7; a: 38.2

ACES DGEMM

ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s, more is better): c: 1127.27; b: 1137.39; a: 1141.19. 1. (CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 5 - Input: Bosphorus 1080p (Frames Per Second, more is better): b: 100.89; a: 101.97; c: 102.11. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyPerformance

PyPerformance 1.11, Benchmark: float (Milliseconds, fewer is better): a: 50.7; b: 50.1

XNNPACK

XNNPACK b7b048, Model: FP16MobileNetV3Small (us, fewer is better): b: 931; a: 920. 1. (CXX) g++ options: -O3 -lrt -lm

Rustls

Rustls 0.23.17, Benchmark: handshake-ticket - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (handshakes/s, more is better): b: 2589637.92; a: 2620332.00. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

XNNPACK

XNNPACK b7b048, Model: QS8MobileNetV2 (us, fewer is better): b: 854; a: 844. 1. (CXX) g++ options: -O3 -lrt -lm

Whisperfile

Whisperfile 20Aug24, Model Size: Tiny (Seconds, fewer is better): b: 42.20; a: 41.71

ONNX Runtime

ONNX Runtime 1.19, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 41.96; a: 42.45. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Renaissance

Renaissance 0.16, Test: Apache Spark PageRank (ms, fewer is better): b: 2439.9 (MIN: 1684.02 / MAX: 2439.95); c: 2439.2 (MIN: 1679.36 / MAX: 2439.21); a: 2412.2 (MIN: 1691.04)

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit (Frames Per Second, more is better): a: 12.47; c: 12.60; b: 12.61. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyPerformance

PyPerformance 1.11, Benchmark: raytrace (Milliseconds, fewer is better): b: 177; a: 175

Rustls

Rustls 0.23.17, Benchmark: handshake-ticket - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s, more is better): b: 1536355.90; a: 1553632.14. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases focus on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (an equity option example), Bonds (a fixed-rate bond with a flat forward curve), and Repo (a securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
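For context on the first of those workloads: the Black-Scholes-Merton model has a closed-form price for a European call, and the benchmark's Black-Scholes kernel evaluates this kind of formula in bulk across many option parameter sets. A minimal sketch of the pricing formula in Python (not FinanceBench's actual OpenMP/OpenCL code; the function and parameter names here are illustrative):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, rate: float,
                       vol: float, t: float) -> float:
    """Closed-form Black-Scholes-Merton price of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    # Call price: S*N(d1) - K*exp(-rT)*N(d2)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

# At-the-money example: S=100, K=100, r=5%, sigma=20%, T=1 year
price = black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0)
print(round(price, 4))  # approximately 10.45
```

The GPU/CPU benchmark versions differ mainly in that they evaluate millions of such independent prices in parallel, which is why the workload scales well with cores.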

FinanceBench 2016-07-25, Benchmark: Bonds OpenMP (ms, fewer is better): b: 33432.64; a: 33061.22. 1. (CXX) g++ options: -O3 -march=native -fopenmp

ONNX Runtime

ONNX Runtime 1.19, Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 629.66; a: 636.32. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 500 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): b: 517.15; a: 511.78. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

ONNX Runtime

ONNX Runtime 1.19, Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 386.58; a: 390.60. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

PyPerformance

PyPerformance 1.11, Benchmark: go (Milliseconds, fewer is better): a: 77.8; b: 77.0

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3 - Input: Bosphorus 4K (Frames Per Second, more is better): c: 9.495; b: 9.554; a: 9.590. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 300 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): b: 107.18; a: 106.13. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512 (Tokens Per Second, more is better): b: 324.21; a: 327.30. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Blender

Blender 4.3, Blend File: Junkshop - Compute: CPU-Only (Seconds, fewer is better): b: 74.26; a: 73.56

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 5 - Input: Bosphorus 4K (Frames Per Second, more is better): b: 34.23; c: 34.45; a: 34.54. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OSPRay

OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better): b: 7.57408; a: 7.63944; c: 7.64282

ONNX Runtime

ONNX Runtime 1.19, Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 10.96; a: 11.06. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

7-Zip Compression

7-Zip Compression, Test: Decompression Rating (MIPS, more is better): a: 165916; b: 166843; c: 167321. 1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Training - Engine: CPU (ms, fewer is better): b: 1383.64 (MIN: 1333.57); a: 1372.03 (MIN: 1342.06). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

OpenVINO GenAI

OpenVINO GenAI 2024.5, Model: Gemma-7b-int4-ov - Device: CPU (tokens/s, more is better): a: 9.83; b: 9.91

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit (Frames Per Second, more is better): b: 18.44; c: 18.56; a: 18.59. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OSPRay

OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better): b: 7.52875; c: 7.55791; a: 7.58789

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit (Frames Per Second, more is better): c: 1.411; b: 1.415; a: 1.422. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

GROMACS

GROMACS, Input: water_GMX50_bare (Ns Per Day, more is better): b: 1.679; a: 1.692. 1. GROMACS version: 2023.3-Ubuntu_2023.3_1ubuntu3

simdjson

simdjson 3.10, Throughput Test: DistinctUserID (GB/s, more is better): b: 10.38; c: 10.43; a: 10.46. 1. (CXX) g++ options: -O3 -lrt

Blender

Blender 4.3, Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better): b: 144.41; a: 143.36

PyPerformance

PyPerformance 1.11, Benchmark: pathlib (Milliseconds, fewer is better): b: 14.3; a: 14.2

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128 (Tokens Per Second, more is better): b: 7.19; a: 7.24. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

ASTC Encoder

ASTC Encoder 5.0, Preset: Exhaustive (MT/s, more is better): b: 1.6728; a: 1.6844. 1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder 5.0, Preset: Thorough (MT/s, more is better): b: 20.16; a: 20.30. 1. (CXX) g++ options: -O3 -flto -pthread

Etcpak

Etcpak 2.0, Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, more is better): c: 573.91; b: 575.02; a: 577.82. 1. (CXX) g++ options: -flto -pthread

PyPerformance

PyPerformance 1.11, Benchmark: nbody (Milliseconds, fewer is better): b: 59.4; a: 59.0

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3 - Input: Bosphorus 1080p (Frames Per Second, more is better): b: 29.38; c: 29.47; a: 29.57. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024 (Tokens Per Second, more is better): b: 68.80; a: 69.26. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 500 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): b: 149.03; a: 148.05. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

ASTC Encoder

ASTC Encoder 5.0, Preset: Medium (MT/s, more is better): b: 155.27; a: 156.22. 1. (CXX) g++ options: -O3 -flto -pthread

Blender

Blender 4.3, Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better): b: 509.3; a: 506.2

PyPerformance

PyPerformance 1.11, Benchmark: pickle_pure_python (Milliseconds, fewer is better): b: 166; a: 165

ASTC Encoder

ASTC Encoder 5.0, Preset: Very Thorough (MT/s, more is better): b: 2.7248; a: 2.7410. 1. (CXX) g++ options: -O3 -flto -pthread

PyPerformance

PyPerformance 1.11, Benchmark: gc_collect (Milliseconds, fewer is better): b: 681; a: 677

OSPRay

OSPRay 3.2, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better): b: 8.93245; c: 8.97005; a: 8.98486

Primesieve

Primesieve 12.6, Length: 1e13 (Seconds, fewer is better): b: 78.95; a: 78.50. 1. (CXX) g++ options: -O3

PyPerformance

PyPerformance 1.11, Benchmark: regex_compile (Milliseconds, fewer is better): b: 70.2; a: 69.8

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16 (Tokens Per Second, more is better): a: 1.78; b: 1.79

OSPRay

OSPRay 3.2, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better): c: 234.97; b: 235.33; a: 236.25

PyPerformance

PyPerformance 1.11, Benchmark: async_tree_io (Milliseconds, fewer is better): b: 759; a: 755

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128 (Tokens Per Second, more is better): a: 1.99; b: 2.00

Blender

Blender 4.3, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): b: 71.70; a: 71.35

Primesieve

Primesieve 12.6, Length: 1e12 (Seconds, fewer is better): b: 6.378; a: 6.347. 1. (CXX) g++ options: -O3

Rustls

Rustls 0.23.17, Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256 (handshakes/s, more is better): b: 76083.73; a: 76454.45. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

FinanceBench

FinanceBench 2016-07-25, Benchmark: Repo OpenMP (ms, fewer is better): b: 21522.07; a: 21418.45. 1. (CXX) g++ options: -O3 -march=native -fopenmp

PyPerformance

PyPerformance 1.11, Benchmark: django_template (Milliseconds, fewer is better): b: 20.8; a: 20.7

OSPRay

OSPRay 3.2, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better): b: 8.96632; c: 8.98586; a: 9.00917

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Dhrystone 2 (LPS, more is better): b: 1857795366.1; c: 1862548305.4; a: 1866536062.7. 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

QuantLib

QuantLib 1.35-dev, Size: XXS (tasks/s, more is better): b: 13.43; a: 13.43; c: 13.49. 1. (CXX) g++ options: -O3 -march=native -fPIE -pie

Y-Cruncher

Y-Cruncher 0.8.5, Pi Digits To Calculate: 1B (Seconds, fewer is better): a: 18.49; b: 18.40

Llama.cpp

Llama.cpp b4154, Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128 (Tokens Per Second, more is better): b: 6.85; a: 6.88. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

OpenVINO GenAI

OpenVINO GenAI 2024.5, Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU (tokens/s, more is better): b: 19.20; a: 19.28

Llamafile

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16 (Tokens Per Second, more is better): a: 24.59; b: 24.69

Whisperfile

Whisperfile 20Aug24, Model Size: Medium (Seconds, fewer is better): a: 534.92; b: 532.81

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Pipe (LPS, more is better): c: 48613927.9; b: 48718087.1; a: 48806257.1. 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

Llamafile

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16 (Tokens Per Second, more is better): a: 10.22; b: 10.26

Blender

Blender 4.3, Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): b: 53.75; a: 53.55

OpenSSL

OpenSSL, Algorithm: AES-128-GCM (byte/s, more is better): b: 104404347840; a: 104784522170. 1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

OpenSSL, Algorithm: AES-256-GCM (byte/s, more is better): b: 96821737060; a: 97172751700. 1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

PyPerformance

PyPerformance 1.11, Benchmark: python_startup (Milliseconds, fewer is better): b: 5.79; a: 5.77

OSPRay

OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better): b: 8.79096; c: 8.81199; a: 8.82093

Whisper.cpp

Whisper.cpp 1.6.2, Model: ggml-medium.en - Input: 2016 State of the Union (Seconds, fewer is better): b: 703.22; a: 700.91. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

PyPerformance

PyPerformance 1.11, Benchmark: asyncio_websockets (Milliseconds, fewer is better): b: 316; a: 315

OpenVINO GenAI

OpenVINO GenAI 2024.5, Model: Falcon-7b-instruct-int4-ov - Device: CPU (tokens/s, more is better): a: 12.93; b: 12.97

QuantLib

QuantLib 1.35-dev, Size: S (tasks/s, more is better): b: 12.71; c: 12.72; a: 12.75. 1. (CXX) g++ options: -O3 -march=native -fPIE -pie

PyPerformance

PyPerformance 1.11, Benchmark: xml_etree (Milliseconds, fewer is better): a: 35.8; b: 35.7

7-Zip Compression

7-Zip Compression, Test: Compression Rating (MIPS, more is better): a: 163859; b: 164050; c: 164313. 1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 100 - Inserts: 1000 - Rounds: 30 (Seconds, fewer is better): b: 70.11; a: 69.93. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

Build2

Build2 0.17, Time To Compile (Seconds, fewer is better): b: 92.29; a: 92.05

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: System Call (LPS, more is better): c: 49016743.6; b: 49062324.1; a: 49140426.6. 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

Y-Cruncher

Y-Cruncher 0.8.5, Pi Digits To Calculate: 500M (Seconds, fewer is better): b: 8.794; a: 8.772

Whisper.cpp

Whisper.cpp 1.6.2, Model: ggml-base.en - Input: 2016 State of the Union (Seconds, fewer is better): a: 87.49; b: 87.27. 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

ONNX Runtime

ONNX Runtime 1.19, Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences Per Second, more is better): a: 156.45; b: 156.83. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

PyPerformance

PyPerformance 1.11, Benchmark: crypto_pyaes (Milliseconds, fewer is better): b: 41.8; a: 41.7

NAMD

NAMD 3.0, Input: STMV with 1,066,628 Atoms (ns/day, more is better): b: 0.75634; a: 0.75656; c: 0.75813

Apache CouchDB

Apache CouchDB 3.4.1, Bulk Size: 300 - Inserts: 3000 - Rounds: 30 (Seconds, fewer is better): b: 368.66; a: 367.83. 1. (CXX) g++ options: -flto -lstdc++ -shared -lei

ONNX Runtime

ONNX Runtime 1.19, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 134.31; a: 134.60. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenSSL

OpenSSL, Algorithm: ChaCha20-Poly1305 (byte/s, more is better): b: 92216350580; a: 92393529340. 1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

OpenSSL, Algorithm: ChaCha20 (byte/s, more is better): b: 130359884190; a: 130588495050. 1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Whetstone Double (MWIPS, more is better): b: 343113.0; c: 343187.0; a: 343491.9. 1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

ONNX Runtime

ONNX Runtime 1.19, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, more is better): b: 141.00; a: 141.12. 1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Blender

Blender 4.3, Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better): b: 166.25; a: 166.12

ASTC Encoder

ASTC Encoder 5.0, Preset: Fast (MT/s, more is better): b: 396.43; a: 396.65. 1. (CXX) g++ options: -O3 -flto -pthread

Rustls

Rustls 0.23.17, Benchmark: handshake-resume - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (handshakes/s, more is better): a: 1820810.21; b: 1821261.88. 1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

Apache Cassandra

Apache Cassandra 5.0, Test: Writes (Op/s, more is better): a: 271333; b: 271373

Llamafile

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048 (Tokens Per Second, more is better): a: 12288; b: 12288

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024 (Tokens Per Second, more is better): a: 6144; b: 6144

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, more is better): a: 3072; b: 3072

Llamafile 0.8.16, Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, more is better): a: 1536; b: 1536

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048 (Tokens Per Second, more is better): a: 32768; b: 32768

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024 (Tokens Per Second, more is better): a: 16384; b: 16384

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512 (Tokens Per Second, more is better): a: 8192; b: 8192

Llamafile 0.8.16, Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256 (Tokens Per Second, more is better): a: 4096; b: 4096

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048 (Tokens Per Second, more is better): a: 32768; b: 32768

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024 (Tokens Per Second, more is better): a: 16384; b: 16384

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512 (Tokens Per Second, more is better): a: 8192; b: 8192

Llamafile 0.8.16, Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256 (Tokens Per Second, more is better): a: 4096; b: 4096

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048 (Tokens Per Second, more is better): a: 32768; b: 32768

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024 (Tokens Per Second, more is better): a: 16384; b: 16384

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512 (Tokens Per Second, more is better): a: 8192; b: 8192

Llamafile 0.8.16, Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256 (Tokens Per Second, more is better): a: 4096; b: 4096

PyPerformance

PyPerformance 1.11, Benchmark: json_loads (Milliseconds, fewer is better): b: 12.1; a: 12.1

OpenVINO GenAI

Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU

a: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

b: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

OpenSSL

Algorithm: RSA4096

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

Algorithm: SHA512

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

Algorithm: SHA256

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

Renaissance

Test: Apache Spark ALS

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

195 Results Shown

LiteRT:
  Quantized COCO SSD MobileNet v1
  NASNet Mobile
  DeepLab V3
  Mobilenet Quant
CP2K Molecular Dynamics
Llama.cpp
Stockfish
RELION
LiteRT
Llama.cpp
Renaissance
Llama.cpp:
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 512
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 1024
LiteRT
Renaissance:
  In-Memory Database Shootout
  Scala Dotty
CP2K Molecular Dynamics
Renaissance
Llama.cpp
Rustls
simdjson
Gcrypt Library
XNNPACK
oneDNN
LiteRT
XNNPACK
simdjson
PyPerformance
Llamafile
XNNPACK
LiteRT
Llama.cpp
Renaissance
XNNPACK
simdjson
Llama.cpp
XNNPACK
Renaissance
XNNPACK:
  FP16MobileNetV1
  FP32MobileNetV3Small
Renaissance:
  Savina Reactors.IO
  Akka Unbalanced Cobwebbed Tree
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 8 - Bosphorus 4K
oneDNN
Renaissance
oneDNN
Llama.cpp
CP2K Molecular Dynamics
oneDNN
x265
SVT-AV1:
  Preset 13 - Bosphorus 1080p
  Preset 5 - Beauty 4K 10-bit
Timed Eigen Compilation
oneDNN
Rustls
Whisper.cpp
Rustls
ONNX Runtime
Llamafile
Rustls:
  handshake - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  handshake-resume - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Stockfish
POV-Ray
Llamafile:
  mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 128
  Llama-3.2-3B-Instruct.Q6_K - Text Generation 128
Renaissance
SVT-AV1
oneDNN
ONNX Runtime
x265
ONNX Runtime
simdjson
Whisperfile
ONNX Runtime
NAMD
ONNX Runtime
Apache CouchDB
Numpy Benchmark
PyPerformance
ACES DGEMM
SVT-AV1
PyPerformance
XNNPACK
Rustls
XNNPACK
Whisperfile
ONNX Runtime
Renaissance
SVT-AV1
PyPerformance
Rustls
FinanceBench
ONNX Runtime
Apache CouchDB
ONNX Runtime
PyPerformance
SVT-AV1
Apache CouchDB
Llama.cpp
Blender
SVT-AV1
OSPRay
ONNX Runtime
7-Zip Compression
oneDNN
OpenVINO GenAI
SVT-AV1
OSPRay
SVT-AV1
GROMACS
simdjson
Blender
PyPerformance
Llama.cpp
ASTC Encoder:
  Exhaustive
  Thorough
Etcpak
PyPerformance
SVT-AV1
Llama.cpp
Apache CouchDB
ASTC Encoder
Blender
PyPerformance
ASTC Encoder
PyPerformance
OSPRay
Primesieve
PyPerformance
Llamafile
OSPRay
PyPerformance
Llamafile
Blender
Primesieve
Rustls
FinanceBench
PyPerformance
OSPRay
BYTE Unix Benchmark
QuantLib
Y-Cruncher
Llama.cpp
OpenVINO GenAI
Llamafile
Whisperfile
BYTE Unix Benchmark
Llamafile
Blender
OpenSSL:
  AES-128-GCM
  AES-256-GCM
PyPerformance
OSPRay
Whisper.cpp
PyPerformance
OpenVINO GenAI
QuantLib
PyPerformance
7-Zip Compression
Apache CouchDB
Build2
BYTE Unix Benchmark
Y-Cruncher
Whisper.cpp
ONNX Runtime
PyPerformance
NAMD
Apache CouchDB
ONNX Runtime
OpenSSL:
  ChaCha20-Poly1305
  ChaCha20
BYTE Unix Benchmark
ONNX Runtime
Blender
ASTC Encoder
Rustls
Apache Cassandra
Llamafile:
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 2048
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 1024
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 512
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 256
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 2048
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 1024
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 512
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 256
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 2048
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 1024
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 512
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 256
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 2048
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 1024
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 512
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 256
PyPerformance