eoy2024

AMD EPYC 4564P 16-Core testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2412061-NE-EOY20243073
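
For scripted or repeated comparisons, the same command can be driven from a small wrapper. A minimal sketch (assuming phoronix-test-suite is installed and on the PATH; the result identifier is the one quoted above):

```python
import subprocess

# Result file referenced by this report (from the command above).
RESULT_ID = "2412061-NE-EOY20243073"

def compare_against(result_id: str) -> None:
    """Run the Phoronix Test Suite benchmark/compare command and fail loudly on error."""
    # check=True raises CalledProcessError if phoronix-test-suite exits non-zero.
    subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)

if __name__ == "__main__":
    compare_against(RESULT_ID)
```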

Run Management

Result Identifier / Date / Test Duration:
  a: December 05 - 6 Hours, 48 Minutes
  b: December 06 - 6 Hours, 49 Minutes
  c: December 06 - 2 Hours, 24 Minutes
  Average test duration: 5 Hours, 20 Minutes



eoy2024 Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

  Processor: AMD EPYC 4564P 16-Core @ 5.88GHz (16 Cores / 32 Threads)
  Motherboard: Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS)
  Chipset: AMD Device 14d8
  Memory: 2 x 32GB DRAM-4800MT/s Micron MTC20C2085S1EC48BA1 BC
  Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 960GB SAMSUNG MZ1L2960HCJR-00A07
  Graphics: ASPEED
  Audio: AMD Rembrandt Radeon HD Audio
  Monitor: VA2431
  Network: 2 x Intel I210
  OS: Ubuntu 24.04
  Kernel: 6.8.0-11-generic (x86_64)
  Desktop: GNOME Shell 45.3
  Display Server: X Server 1.21.1.11
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 1024x768

System Logs
  - Transparent Huge Pages: madvise
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-fxIygj/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: amd-pstate-epp performance (EPP: performance)
  - CPU Microcode: 0xa601209
  - OpenJDK Runtime Environment (build 21.0.2+13-Ubuntu-2)
  - Python 3.12.3
  - Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (OpenBenchmarking.org / Phoronix Test Suite): relative performance of runs a, b, and c (chart scale 100% to 110%) across Stockfish, RELION, CP2K Molecular Dynamics, Renaissance, x265, simdjson, ACES DGEMM, SVT-AV1, NAMD, Etcpak, OSPRay, 7-Zip Compression, QuantLib, and BYTE Unix Benchmark.

eoy2024 summary table: the complete list of tests with the raw a/b/c values (flattened chart data). The same results are presented individually, per test, in the sections below.

Etcpak

Etcpak 2.0 (Mpx/s, more is better)
  Benchmark: Multi-Threaded - Configuration: ETC2: a: 577.82 / b: 575.02 / c: 573.91
  1. (CXX) g++ options: -flto -pthread

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 (Seconds, fewer is better)
  Input: H20-64: a: 58.19 / b: 57.35 / c: 58.65
  Input: H20-256: a: 592.86 / b: 629.56 / c: 624.57
  Input: Fayalite-FIST: a: 94.03 / b: 102.42 / c: 105.22
  1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

NAMD

NAMD 3.0 (ns/day, more is better)
  Input: ATPase with 327,506 Atoms: a: 2.79632 / b: 2.79025 / c: 2.82925
  Input: STMV with 1,066,628 Atoms: a: 0.75656 / b: 0.75634 / c: 0.75813

QuantLib

QuantLib 1.35-dev (tasks/s, more is better)
  Size: S: a: 12.75 / b: 12.71 / c: 12.72
  Size: XXS: a: 13.43 / b: 13.43 / c: 13.49
  1. (CXX) g++ options: -O3 -march=native -fPIE -pie

RELION

RELION 5.0 (Seconds, fewer is better)
  Test: Basic - Device: CPU: a: 944.27 / b: 867.32 / c: 939.90
  1. (CXX) g++ options: -fPIC -std=c++14 -fopenmp -O3 -rdynamic -lfftw3f -lfftw3 -ldl -ltiff -lpng -ljpeg -lmpi_cxx -lmpi

simdjson

simdjson 3.10 (GB/s, more is better)
  Throughput Test: Kostya: a: 5.97 / b: 5.93 / c: 5.73
  Throughput Test: TopTweet: a: 10.46 / b: 10.80 / c: 10.79
  Throughput Test: LargeRandom: a: 1.83 / b: 1.81 / c: 1.74
  Throughput Test: PartialTweets: a: 9.76 / b: 9.82 / c: 9.68
  Throughput Test: DistinctUserID: a: 10.46 / b: 10.38 / c: 10.43
  1. (CXX) g++ options: -O3 -lrt

Renaissance

Renaissance 0.16 (ms, fewer is better)
  Test: Scala Dotty: a: 477.0 (min 371.54 / max 736.5) / b: 447.0 (min 402.95 / max 718.21) / c: 458.5 (min 406.93 / max 746.39)
  Test: Random Forest: a: 414.4 (min 322.79 / max 466.1) / b: 398.1 (min 343.09 / max 475.62) / c: 420.8 (min 316.29 / max 556.39)
  Test: ALS Movie Lens: a: 9805.7 (min 9253.4 / max 10057.61) / b: 9958.3 (min 9305.94 / max 10040.58) / c: 9907.4 (min 9393.64 / max 10087.8)

Test: Apache Spark ALS

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

Renaissance 0.16 (ms, fewer is better)
  Test: Apache Spark Bayes: a: 490.0 (min 459.29 / max 580.9) / b: 529.5 (min 458.39 / max 562.09) / c: 500.3 (min 460.66 / max 542.36)
  Test: Savina Reactors.IO: a: 3506.4 (min 3506.38 / max 4329.37) / b: 3594.3 (min 3594.26 / max 4599.09) / c: 3567.8 (max 5162.74)
  Test: Apache Spark PageRank: a: 2412.2 (min 1691.04) / b: 2439.9 (min 1684.02 / max 2439.95) / c: 2439.2 (min 1679.36 / max 2439.21)
  Test: Finagle HTTP Requests: a: 2319.4 (min 1832.84) / b: 2264.7 (min 1788.41 / max 2264.71) / c: 2296.6 (min 1805.17)
  Test: Gaussian Mixture Model: a: 3399.5 (min 2471.52) / b: 3494.8 (min 2520.23) / c: 3472.4 (min 2469.6)
  Test: In-Memory Database Shootout: a: 3256.1 (min 3019.89 / max 3599.5) / b: 3081.5 (min 2836.52 / max 3397.02) / c: 3046.8 (min 2814.66 / max 3304.16)
  Test: Akka Unbalanced Cobwebbed Tree: a: 4403.8 (max 5719.11) / b: 4439.9 (max 5696.46) / c: 4331.7 (min 4331.69 / max 5601.8)
  Test: Genetic Algorithm Using Jenetics + Futures: a: 732.8 (min 713.67 / max 813.49) / b: 744.3 (min 714.12 / max 802.66) / c: 719.1 (min 670.9 / max 764.9)

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git (more is better)
  Computational Test: Pipe (LPS): a: 48806257.1 / b: 48718087.1 / c: 48613927.9
  Computational Test: Dhrystone 2 (LPS): a: 1866536062.7 / b: 1857795366.1 / c: 1862548305.4
  Computational Test: System Call (LPS): a: 49140426.6 / b: 49062324.1 / c: 49016743.6
  Computational Test: Whetstone Double (MWIPS): a: 343491.9 / b: 343113.0 / c: 343187.0
  1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

SVT-AV1

SVT-AV1 2.3 (Frames Per Second, more is better)
  Encoder Mode: Preset 3 - Input: Bosphorus 4K: a: 9.590 / b: 9.554 / c: 9.495
  Encoder Mode: Preset 5 - Input: Bosphorus 4K: a: 34.54 / b: 34.23 / c: 34.45
  Encoder Mode: Preset 8 - Input: Bosphorus 4K: a: 102.01 / b: 99.55 / c: 100.89
  Encoder Mode: Preset 13 - Input: Bosphorus 4K: a: 212.52 / b: 209.77 / c: 212.95
  Encoder Mode: Preset 3 - Input: Bosphorus 1080p: a: 29.57 / b: 29.38 / c: 29.47
  Encoder Mode: Preset 5 - Input: Bosphorus 1080p: a: 101.97 / b: 100.89 / c: 102.11
  Encoder Mode: Preset 8 - Input: Bosphorus 1080p: a: 339.02 / b: 330.87 / c: 338.65
  Encoder Mode: Preset 13 - Input: Bosphorus 1080p: a: 842.56 / b: 824.81 / c: 838.17
  Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit: a: 1.422 / b: 1.415 / c: 1.411
  Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit: a: 6.504 / b: 6.371 / c: 6.374
  Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit: a: 12.47 / b: 12.61 / c: 12.60
  Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit: a: 18.59 / b: 18.44 / c: 18.56
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

x265

x265 (Frames Per Second, more is better)
  Video Input: Bosphorus 4K: a: 32.57 / b: 32.04 / c: 32.73
  Video Input: Bosphorus 1080p: a: 114.45 / b: 112.85 / c: 114.52
  1. x265 [info]: HEVC encoder version 3.5+1-f0c1022b6

ACES DGEMM

ACES DGEMM 1.0 (GFLOP/s, more is better)
  Sustained Floating-Point Rate: a: 1141.19 / b: 1137.39 / c: 1127.27
  1. (CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas

OSPRay

OSPRay 3.2 (Items Per Second, more is better)
  Benchmark: particle_volume/ao/real_time: a: 9.00917 / b: 8.96632 / c: 8.98586
  Benchmark: particle_volume/scivis/real_time: a: 8.98486 / b: 8.93245 / c: 8.97005
  Benchmark: particle_volume/pathtracer/real_time: a: 236.25 / b: 235.33 / c: 234.97
  Benchmark: gravity_spheres_volume/dim_512/ao/real_time: a: 7.63944 / b: 7.57408 / c: 7.64282
  Benchmark: gravity_spheres_volume/dim_512/scivis/real_time: a: 7.58789 / b: 7.52875 / c: 7.55791
  Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time: a: 8.82093 / b: 8.79096 / c: 8.81199

7-Zip Compression

7-Zip Compression (MIPS, more is better)
  Test: Compression Rating: a: 163859 / b: 164050 / c: 164313
  Test: Decompression Rating: a: 165916 / b: 166843 / c: 167321
  1. 7-Zip 23.01 (x64) : Copyright (c) 1999-2023 Igor Pavlov : 2023-06-20

Stockfish

Stockfish 17 (Nodes Per Second, more is better)
  Chess Benchmark: a: 54752796 / b: 59130265 / c: 53623108
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

Stockfish (Nodes Per Second, more is better)
  Chess Benchmark: a: 46507038 / b: 45751747
  1. Stockfish 16 by the Stockfish developers (see AUTHORS file)

Build2

Build2 0.17 (Seconds, fewer is better)
  Time To Compile: a: 92.05 / b: 92.29

Primesieve

Primesieve 12.6 (Seconds, fewer is better)
  Length: 1e12: a: 6.347 / b: 6.378
  Length: 1e13: a: 78.50 / b: 78.95
  1. (CXX) g++ options: -O3

Y-Cruncher

Y-Cruncher 0.8.5 (Seconds, fewer is better)
  Pi Digits To Calculate: 1B: a: 18.49 / b: 18.40
  Pi Digits To Calculate: 500M: a: 8.772 / b: 8.794

POV-Ray

POV-Ray (Seconds, fewer is better)
  Trace Time: a: 18.54 / b: 18.85
  1. POV-Ray 3.7.0.10.unofficial

oneDNN

oneDNN 3.6 (ms, fewer is better)
  Harness: IP Shapes 1D - Engine: CPU: a: 1.12573 (min 1.03) / b: 1.15274 (min 1.03)
  Harness: IP Shapes 3D - Engine: CPU: a: 4.05800 (min 3.75) / b: 4.15682 (min 3.75)
  Harness: Convolution Batch Shapes Auto - Engine: CPU: a: 6.67287 (min 6.2) / b: 6.81754 (min 6.2)
  Harness: Deconvolution Batch shapes_1d - Engine: CPU: a: 2.97612 (min 2.42) / b: 3.11260 (min 2.4)
  Harness: Deconvolution Batch shapes_3d - Engine: CPU: a: 2.41294 (min 2.34) / b: 2.46279 (min 2.35)
  Harness: Recurrent Neural Network Training - Engine: CPU: a: 1372.03 (min 1342.06) / b: 1383.64 (min 1333.57)
  Harness: Recurrent Neural Network Inference - Engine: CPU: a: 700.86 (min 679.89) / b: 711.43 (min 684.03)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better)
  a: 775.75 / b: 765.35

Timed Eigen Compilation

Timed Eigen Compilation 3.4.0 (Seconds, fewer is better)
  Time To Compile: a: 58.66 / b: 59.87

Gcrypt Library

Gcrypt Library 1.10.3 (Seconds, fewer is better)
  a: 162.13 / b: 154.53
  1. (CC) gcc options: -O2 -fvisibility=hidden

Rustls

Rustls 0.23.17 (handshakes/s, more is better)
  Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256: a: 76454.45 / b: 76083.73
  Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: a: 80462.6 / b: 79085.8
  Benchmark: handshake-resume - Suite: TLS13_CHACHA20_POLY1305_SHA256: a: 388077.69 / b: 380493.86
  Benchmark: handshake-ticket - Suite: TLS13_CHACHA20_POLY1305_SHA256: a: 404263.45 / b: 397022.40
  Benchmark: handshake - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: a: 423535.68 / b: 402625.06
  Benchmark: handshake-resume - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: a: 3563852.57 / b: 3504511.31
  Benchmark: handshake-ticket - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256: a: 2620332.00 / b: 2589637.92
  Benchmark: handshake-resume - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: a: 1820810.21 / b: 1821261.88
  Benchmark: handshake-ticket - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384: a: 1553632.14 / b: 1536355.90
  1. (CC) gcc options: -m64 -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc -pie -nodefaultlibs

OpenSSL

Algorithm: SHA256

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

Algorithm: SHA512

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

Algorithm: RSA4096

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

OpenSSL (byte/s, more is better)
  Algorithm: ChaCha20: a: 130588495050 / b: 130359884190
  Algorithm: AES-128-GCM: a: 104784522170 / b: 104404347840
  Algorithm: AES-256-GCM: a: 97172751700 / b: 96821737060
  Algorithm: ChaCha20-Poly1305: a: 92393529340 / b: 92216350580
  1. OpenSSL 3.0.13 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024) - Additional Parameters: -engine qatengine -async_jobs 8

Apache CouchDB

Apache CouchDB 3.4.1 (Seconds, fewer is better)
  Bulk Size: 100 - Inserts: 1000 - Rounds: 30: a: 69.93 / b: 70.11
  Bulk Size: 100 - Inserts: 3000 - Rounds: 30: a: 232.19 / b: 235.35
  Bulk Size: 300 - Inserts: 1000 - Rounds: 30: a: 106.13 / b: 107.18
  Bulk Size: 300 - Inserts: 3000 - Rounds: 30: a: 367.83 / b: 368.66
  Bulk Size: 500 - Inserts: 1000 - Rounds: 30: a: 148.05 / b: 149.03
  Bulk Size: 500 - Inserts: 3000 - Rounds: 30: a: 511.78 / b: 517.15
  1. (CXX) g++ options: -flto -lstdc++ -shared -lei

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with a flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
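
As a point of reference for the kind of arithmetic the Black-Scholes-Merton test case exercises, below is a minimal, self-contained European call pricer. It is only an illustrative sketch of the closed-form formula, not FinanceBench's actual OpenMP/OpenCL kernels, and the function and parameter names are assumptions.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot: float, strike: float, rate: float, vol: float, years: float) -> float:
    # Closed-form Black-Scholes-Merton price of a European call option.
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# Example: at-the-money call, 5% risk-free rate, 20% volatility, one year out (about 10.45).
print(round(black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0), 4))
```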

FinanceBench 2016-07-25 (ms, fewer is better)
  Benchmark: Repo OpenMP: a: 21418.45 / b: 21522.07
  Benchmark: Bonds OpenMP: a: 33061.22 / b: 33432.64
  1. (CXX) g++ options: -O3 -march=native -fopenmp

ASTC Encoder

ASTC Encoder 5.0 (MT/s, more is better)
  Preset: Fast: a: 396.65 / b: 396.43
  Preset: Medium: a: 156.22 / b: 155.27
  Preset: Thorough: a: 20.30 / b: 20.16
  Preset: Exhaustive: a: 1.6844 / b: 1.6728
  Preset: Very Thorough: a: 2.7410 / b: 2.7248
  1. (CXX) g++ options: -O3 -flto -pthread

GROMACS

GROMACS (Ns Per Day, more is better)
  Input: water_GMX50_bare: a: 1.692 / b: 1.679
  1. GROMACS version: 2023.3-Ubuntu_2023.3_1ubuntu3

LiteRT

LiteRT 2024-10-15 (Microseconds, fewer is better)
  Model: DeepLab V3: a: 3579.67 / b: 4287.06
  Model: SqueezeNet: a: 1794.11 / b: 1860.35
  Model: Inception V4: a: 21477.8 / b: 23265.4
  Model: NASNet Mobile: a: 16936.0 / b: 21468.7
  Model: Mobilenet Float: a: 1211.48 / b: 1295.51
  Model: Mobilenet Quant: a: 823.17 / b: 933.18
  Model: Inception ResNet V2: a: 19530.2 / b: 20375.7
  Model: Quantized COCO SSD MobileNet v1: a: 2129.52 / b: 2958.48

XNNPACK

XNNPACK b7b048 (us, fewer is better)
  Model: FP32MobileNetV1: a: 1252 / b: 1290
  Model: FP32MobileNetV2: a: 1495 / b: 1559
  Model: FP32MobileNetV3Large: a: 1810 / b: 1877
  Model: FP32MobileNetV3Small: a: 979 / b: 1005
  Model: FP16MobileNetV1: a: 1143 / b: 1174
  Model: FP16MobileNetV2: a: 1190 / b: 1247
  Model: FP16MobileNetV3Large: a: 1498 / b: 1549
  Model: FP16MobileNetV3Small: a: 920 / b: 931
  Model: QS8MobileNetV2: a: 844 / b: 854
  1. (CXX) g++ options: -O3 -lrt -lm

Blender

Blender 4.3 (Seconds, fewer is better)
  Blend File: BMW27 - Compute: CPU-Only: a: 53.55 / b: 53.75
  Blend File: Junkshop - Compute: CPU-Only: a: 73.56 / b: 74.26
  Blend File: Classroom - Compute: CPU-Only: a: 143.36 / b: 144.41
  Blend File: Fishy Cat - Compute: CPU-Only: a: 71.35 / b: 71.70
  Blend File: Barbershop - Compute: CPU-Only: a: 506.2 / b: 509.3
  Blend File: Pabellon Barcelona - Compute: CPU-Only: a: 166.12 / b: 166.25

Apache Cassandra

Apache Cassandra 5.0 (Op/s, more is better)
  Test: Writes: a: 271333 / b: 271373

PyPerformance

PyPerformance 1.11 (Milliseconds, fewer is better)
  Benchmark: go: a: 77.8 / b: 77.0
  Benchmark: chaos: a: 38.2 / b: 38.7
  Benchmark: float: a: 50.7 / b: 50.1
  Benchmark: nbody: a: 59.0 / b: 59.4
  Benchmark: pathlib: a: 14.2 / b: 14.3
  Benchmark: raytrace: a: 175 / b: 177
  Benchmark: xml_etree: a: 35.8 / b: 35.7
  Benchmark: gc_collect: a: 677 / b: 681
  Benchmark: json_loads: a: 12.1 / b: 12.1
  Benchmark: crypto_pyaes: a: 41.7 / b: 41.8
  Benchmark: async_tree_io: a: 755 / b: 759
  Benchmark: regex_compile: a: 69.8 / b: 70.2
  Benchmark: python_startup: a: 5.77 / b: 5.79
  Benchmark: asyncio_tcp_ssl: a: 645 / b: 672
  Benchmark: django_template: a: 20.7 / b: 20.8
  Benchmark: asyncio_websockets: a: 315 / b: 316
  Benchmark: pickle_pure_python: a: 165 / b: 166

ONNX Runtime

ONNX Runtime 1.19 (Inferences Per Second, more is better)
  Model: GPT-2 - Device: CPU - Executor: Standard: a: 134.60 / b: 134.31
  Model: yolov4 - Device: CPU - Executor: Standard: a: 11.06 / b: 10.96
  Model: ZFNet-512 - Device: CPU - Executor: Standard: a: 102.33 / b: 103.86
  Model: T5 Encoder - Device: CPU - Executor: Standard: a: 156.45 / b: 156.83
  Model: bertsquad-12 - Device: CPU - Executor: Standard: a: 15.59 / b: 15.32
  Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard: a: 636.32 / b: 629.66
  Model: fcn-resnet101-11 - Device: CPU - Executor: Standard: a: 3.2167 / b: 3.1705
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard: a: 42.45 / b: 41.96
  Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard: a: 390.60 / b: 386.58
  Model: super-resolution-10 - Device: CPU - Executor: Standard: a: 141.12 / b: 141.00
  Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard: a: 1.54196 / b: 1.52109
  Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard: a: 47.07 / b: 47.73
  1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Whisper.cpp

Whisper.cpp 1.6.2 (Seconds, fewer is better)
  Model: ggml-base.en - Input: 2016 State of the Union: a: 87.49 / b: 87.27
  Model: ggml-small.en - Input: 2016 State of the Union: a: 245.08 / b: 240.60
  Model: ggml-medium.en - Input: 2016 State of the Union: a: 700.91 / b: 703.22
  1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Whisperfile

Whisperfile 20Aug24 (Seconds, fewer is better)
  Model Size: Tiny: a: 41.71 / b: 42.20
  Model Size: Small: a: 195.42 / b: 192.68
  Model Size: Medium: a: 534.92 / b: 532.81

Llama.cpp

Llama.cpp b4154 (Tokens Per Second, more is better)
  Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128: a: 6.88 / b: 6.85
  Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512: a: 70.76 / b: 75.96
  Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024: a: 70.85 / b: 66.00
  Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048: a: 63.09 / b: 59.84
  Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128: a: 7.24 / b: 7.19
  Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512: a: 68.40 / b: 61.62
  Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024: a: 69.26 / b: 68.80
  Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048: a: 62.97 / b: 65.27
  Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128: a: 47.72 / b: 46.28
  Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512: a: 327.30 / b: 324.21
  Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024: a: 355.09 / b: 328.47
  Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048: a: 279.04 / b: 285.71
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -fopenmp -march=native -mtune=native -lopenblas

Llamafile

Llamafile 0.8.16 (Tokens Per Second, more is better)
  Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16: a: 19.03 / b: 18.32
  Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128: a: 20.13 / b: 19.81
  Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256: a: 4096 / b: 4096
  Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512: a: 8192 / b: 8192
  Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16: a: 24.59 / b: 24.69
  Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024: a: 16384 / b: 16384
  Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048: a: 32768 / b: 32768
  Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128: a: 26.28 / b: 25.83
  Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16: a: 10.22 / b: 10.26
  Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256: a: 4096 / b: 4096
  Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512: a: 8192 / b: 8192
  Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128: a: 10.47 / b: 10.64
  Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16: a: 1.78 / b: 1.79
  Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024: a: 16384 / b: 16384
  Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048: a: 32768 / b: 32768
  Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128: a: 1.99 / b: 2.00
  Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256: a: 4096 / b: 4096
  Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512: a: 8192 / b: 8192
  Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024: a: 16384 / b: 16384
  Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048: a: 32768 / b: 32768
  Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256: a: 1536 / b: 1536
  Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512: a: 3072 / b: 3072
  Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024: a: 6144 / b: 6144
  Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048: a: 12288 / b: 12288

OpenVINO GenAI

OpenVINO GenAI 2024.5 (tokens/s, more is better)
  Model: Gemma-7b-int4-ov - Device: CPU: a: 9.83 / b: 9.91

Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU

a: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

b: The test quit with a non-zero exit status. E: RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:

OpenVINO GenAI 2024.5 (tokens/s, more is better)
  Model: Falcon-7b-instruct-int4-ov - Device: CPU: a: 12.93 / b: 12.97
  Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU: a: 19.28 / b: 19.20

195 Results Shown

Etcpak
CP2K Molecular Dynamics:
  H20-64
  H20-256
  Fayalite-FIST
NAMD:
  ATPase with 327,506 Atoms
  STMV with 1,066,628 Atoms
QuantLib:
  S
  XXS
RELION
simdjson:
  Kostya
  TopTweet
  LargeRand
  PartialTweets
  DistinctUserID
Renaissance:
  Scala Dotty
  Rand Forest
  ALS Movie Lens
  Apache Spark Bayes
  Savina Reactors.IO
  Apache Spark PageRank
  Finagle HTTP Requests
  Gaussian Mixture Model
  In-Memory Database Shootout
  Akka Unbalanced Cobwebbed Tree
  Genetic Algorithm Using Jenetics + Futures
BYTE Unix Benchmark:
  Pipe
  Dhrystone 2
  System Call
  Whetstone Double
SVT-AV1:
  Preset 3 - Bosphorus 4K
  Preset 5 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 3 - Bosphorus 1080p
  Preset 5 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
  Preset 3 - Beauty 4K 10-bit
  Preset 5 - Beauty 4K 10-bit
  Preset 8 - Beauty 4K 10-bit
  Preset 13 - Beauty 4K 10-bit
x265:
  Bosphorus 4K
  Bosphorus 1080p
ACES DGEMM
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
7-Zip Compression:
  Compression Rating
  Decompression Rating
Stockfish
Stockfish
Build2
Primesieve:
  1e12
  1e13
Y-Cruncher:
  1B
  500M
POV-Ray
oneDNN:
  IP Shapes 1D - CPU
  IP Shapes 3D - CPU
  Convolution Batch Shapes Auto - CPU
  Deconvolution Batch shapes_1d - CPU
  Deconvolution Batch shapes_3d - CPU
  Recurrent Neural Network Training - CPU
  Recurrent Neural Network Inference - CPU
Numpy Benchmark
Timed Eigen Compilation
Gcrypt Library
Rustls:
  handshake - TLS13_CHACHA20_POLY1305_SHA256
  handshake - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  handshake-resume - TLS13_CHACHA20_POLY1305_SHA256
  handshake-ticket - TLS13_CHACHA20_POLY1305_SHA256
  handshake - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  handshake-resume - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  handshake-ticket - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  handshake-resume - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  handshake-ticket - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
OpenSSL:
  ChaCha20
  AES-128-GCM
  AES-256-GCM
  ChaCha20-Poly1305
Apache CouchDB:
  100 - 1000 - 30
  100 - 3000 - 30
  300 - 1000 - 30
  300 - 3000 - 30
  500 - 1000 - 30
  500 - 3000 - 30
FinanceBench:
  Repo OpenMP
  Bonds OpenMP
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
  Very Thorough
GROMACS
LiteRT:
  DeepLab V3
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
  Quantized COCO SSD MobileNet v1
XNNPACK:
  FP32MobileNetV1
  FP32MobileNetV2
  FP32MobileNetV3Large
  FP32MobileNetV3Small
  FP16MobileNetV1
  FP16MobileNetV2
  FP16MobileNetV3Large
  FP16MobileNetV3Small
  QS8MobileNetV2
Blender:
  BMW27 - CPU-Only
  Junkshop - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
Apache Cassandra
PyPerformance:
  go
  chaos
  float
  nbody
  pathlib
  raytrace
  xml_etree
  gc_collect
  json_loads
  crypto_pyaes
  async_tree_io
  regex_compile
  python_startup
  asyncio_tcp_ssl
  django_template
  asyncio_websockets
  pickle_pure_python
ONNX Runtime:
  GPT-2 - CPU - Standard
  yolov4 - CPU - Standard
  ZFNet-512 - CPU - Standard
  T5 Encoder - CPU - Standard
  bertsquad-12 - CPU - Standard
  CaffeNet 12-int8 - CPU - Standard
  fcn-resnet101-11 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
  ResNet50 v1-12-int8 - CPU - Standard
  super-resolution-10 - CPU - Standard
  ResNet101_DUC_HDC-12 - CPU - Standard
  Faster R-CNN R-50-FPN-int8 - CPU - Standard
Whisper.cpp:
  ggml-base.en - 2016 State of the Union
  ggml-small.en - 2016 State of the Union
  ggml-medium.en - 2016 State of the Union
Whisperfile:
  Tiny
  Small
  Medium
Llama.cpp:
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Text Generation 128
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 512
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 1024
  CPU BLAS - Llama-3.1-Tulu-3-8B-Q8_0 - Prompt Processing 2048
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Text Generation 128
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 512
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 1024
  CPU BLAS - Mistral-7B-Instruct-v0.3-Q8_0 - Prompt Processing 2048
  CPU BLAS - granite-3.0-3b-a800m-instruct-Q8_0 - Text Generation 128
  CPU BLAS - granite-3.0-3b-a800m-instruct-Q8_0 - Prompt Processing 512
  CPU BLAS - granite-3.0-3b-a800m-instruct-Q8_0 - Prompt Processing 1024
  CPU BLAS - granite-3.0-3b-a800m-instruct-Q8_0 - Prompt Processing 2048
Llamafile:
  Llama-3.2-3B-Instruct.Q6_K - Text Generation 16
  Llama-3.2-3B-Instruct.Q6_K - Text Generation 128
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 256
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 512
  TinyLlama-1.1B-Chat-v1.0.BF16 - Text Generation 16
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 1024
  Llama-3.2-3B-Instruct.Q6_K - Prompt Processing 2048
  TinyLlama-1.1B-Chat-v1.0.BF16 - Text Generation 128
  mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 16
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 256
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 512
  mistral-7b-instruct-v0.2.Q5_K_M - Text Generation 128
  wizardcoder-python-34b-v1.0.Q6_K - Text Generation 16
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 1024
  TinyLlama-1.1B-Chat-v1.0.BF16 - Prompt Processing 2048
  wizardcoder-python-34b-v1.0.Q6_K - Text Generation 128
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 256
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 512
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 1024
  mistral-7b-instruct-v0.2.Q5_K_M - Prompt Processing 2048
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 256
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 512
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 1024
  wizardcoder-python-34b-v1.0.Q6_K - Prompt Processing 2048
OpenVINO GenAI:
  Gemma-7b-int4-ov - CPU
  Falcon-7b-instruct-int4-ov - CPU
  Phi-3-mini-128k-instruct-int4-ov - CPU