i5-10400F-2
Intel Core i5-10400 testing with a Gigabyte B460M DS3H (F5 BIOS) and Intel DG2 [Arc A750] on Ubuntu 24.04 via the Phoronix Test Suite.

run2:
  Processor: Intel Core i5-10400 @ 4.30GHz (6 Cores / 12 Threads)
  Motherboard: Gigabyte B460M DS3H (F5 BIOS)
  Chipset: Intel Comet Lake-S 6c
  Memory: 4 x 8GB DDR4-2666MT/s DDR4 3000
  Disk: 512GB SPCC M.2 PCIe SSD + 512GB SPCC Solid State + 512GB Fanxiang S101 51 + 128GB nal USB 3.0 + 63GB ProductCode + 1000GB 2105
  Graphics: Intel DG2 [Arc A750]
  Audio: Realtek ALC887-VD
  Monitor: XV272U V
  Network: Realtek RTL8111/8168/8211/8411
  OS: Ubuntu 24.04
  Kernel: 6.8.0-50-generic (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.3.0.20240801-2119~24.04 (git-9fc8668b66)
  OpenCL: OpenCL 3.0
  Compiler: GCC 13.3.0 + Clang 18.1.3
  File-System: ext4
  Screen Resolution: 4480x1440

Unigine Tropics 1.3
Resolution: 1920 x 1080 - Mode: Windowed
Frames Per Second > Higher Is Better
run2 . 204.76

perf-bench 6.9
Benchmark: Epoll Wait
ops/sec > Higher Is Better
run2 . 150422

perf-bench 6.9
Benchmark: Futex Hash
ops/sec > Higher Is Better
run2 . 3466352

perf-bench 6.9
Benchmark: Memcpy 1MB
GB/sec > Higher Is Better
run2 . 30.78

perf-bench 6.9
Benchmark: Memset 1MB
GB/sec > Higher Is Better
run2 . 47.54

perf-bench 6.9
Benchmark: Sched Pipe
ops/sec > Higher Is Better
run2 . 208735

perf-bench 6.9
Benchmark: Futex Lock-Pi
ops/sec > Higher Is Better
run2 . 1432

perf-bench 6.9
Benchmark: Syscall Basic
ops/sec > Higher Is Better
run2 . 10113978

IPC_benchmark
Type: TCP Socket - Message Bytes: 1024
Messages Per Second > Higher Is Better
run2 . 1997454

IPC_benchmark
Type: Unnamed Pipe - Message Bytes: 1024
Messages Per Second > Higher Is Better
run2 . 2521966

IPC_benchmark
Type: FIFO Named Pipe - Message Bytes: 1024
Messages Per Second > Higher Is Better
run2 . 2829521

IPC_benchmark
Type: Unnamed Unix Domain Socket - Message Bytes: 1024
Messages Per Second > Higher Is Better
run2 . 1380194

Java SciMark 2.2
Computational Test: Composite
Mflops > Higher Is Better
run2 . 2691.76

Java SciMark 2.2
Computational Test: Monte Carlo
Mflops > Higher Is Better
run2 . 1004.47

Java SciMark 2.2
Computational Test: Fast Fourier Transform
Mflops > Higher Is Better
run2 . 472.47

Java SciMark 2.2
Computational Test: Sparse Matrix Multiply
Mflops > Higher Is Better
run2 . 1820.82

Java SciMark 2.2
Computational Test: Dense LU Matrix Factorization
Mflops > Higher Is Better
run2 . 8720.54

Java SciMark 2.2
Computational Test: Jacobi Successive Over-Relaxation
Mflops > Higher Is Better
run2 . 1440.50

LuaRadio 0.9.1
MiB/s > Higher Is Better

Crafty 25.2
Elapsed Time
Nodes Per Second > Higher Is Better
run2 . 7934108

AOM AV1 3.9
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
Frames Per Second > Higher Is Better
run2 . 37.29

Intel Open Image Denoise 2.3
Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
run2 . 0.26

Intel Open Image Denoise 2.3
Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
Images / Sec > Higher Is Better
run2 . 0.26

Intel Open Image Denoise 2.3
Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
Images / Sec > Higher Is Better
run2 . 0.13

Intel Open Image Denoise 2.3
Run: RT.hdr_alb_nrm.3840x2160 - Device: Intel oneAPI SYCL
Images / Sec > Higher Is Better
run2 . 22.21

Intel Open Image Denoise 2.3
Run: RT.ldr_alb_nrm.3840x2160 - Device: Intel oneAPI SYCL
Images / Sec > Higher Is Better
run2 . 22.45

Intel Open Image Denoise 2.3
Run: RTLightmap.hdr.4096x4096 - Device: Intel oneAPI SYCL
Images / Sec > Higher Is Better
run2 . 10.65

OSPRay 3.2
Benchmark: particle_volume/ao/real_time
Items Per Second > Higher Is Better
run2 . 1.94675

OSPRay 3.2
Benchmark: particle_volume/scivis/real_time
Items Per Second > Higher Is Better
run2 . 1.92649

OSPRay 3.2
Benchmark: particle_volume/pathtracer/real_time
Items Per Second > Higher Is Better
run2 . 67.46

OSPRay 3.2
Benchmark: gravity_spheres_volume/dim_512/ao/real_time
Items Per Second > Higher Is Better
run2 . 1.24491

OSPRay 3.2
Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
Items Per Second > Higher Is Better
run2 . 1.20018

OSPRay 3.2
Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
Items Per Second > Higher Is Better
run2 . 1.57327

7-Zip Compression 24.05
Test: Compression Rating
MIPS > Higher Is Better
run2 . 47318

7-Zip Compression 24.05
Test: Decompression Rating
MIPS > Higher Is Better
run2 . 34930

Stockfish 17
Chess Benchmark
Nodes Per Second > Higher Is Better
run2 . 9534530

Timed FFmpeg Compilation 7.0
Time To Compile
Seconds < Lower Is Better
run2 . 115.34

Timed Godot Game Engine Compilation 4.0
Time To Compile
Seconds < Lower Is Better
run2 . 688.00

C-Ray 2.0
Resolution: 4K - Rays Per Pixel: 16
Seconds < Lower Is Better
run2 . 527.57

POV-Ray 3.7.0.7
Trace Time
Seconds < Lower Is Better
run2 . 75.79

Rust Prime Benchmark
Prime Number Test To 200,000,000
Seconds < Lower Is Better
run2 . 21.55

Numpy Benchmark
Score > Higher Is Better
run2 . 389.39

LAME MP3 Encoding 3.100
WAV To MP3
Seconds < Lower Is Better
run2 . 8.706

R Benchmark
Seconds < Lower Is Better

OpenSSL 3.3
Algorithm: SHA256
byte/s > Higher Is Better
run2 . 1973920977

OpenSSL 3.3
Algorithm: SHA512
byte/s > Higher Is Better
run2 . 2142863600

OpenSSL 3.3
Algorithm: RSA4096
sign/s > Higher Is Better
run2 . 1769.2

OpenSSL 3.3
Algorithm: RSA4096
verify/s > Higher Is Better
run2 . 114434.8

OpenSSL 3.3
Algorithm: ChaCha20
byte/s > Higher Is Better
run2 . 22684217250

OpenSSL 3.3
Algorithm: AES-128-GCM
byte/s > Higher Is Better
run2 . 34587610047

OpenSSL 3.3
Algorithm: AES-256-GCM
byte/s > Higher Is Better
run2 . 25907802110

OpenSSL 3.3
Algorithm: ChaCha20-Poly1305
byte/s > Higher Is Better
run2 . 15433989780

MariaDB 11.5
Test: oltp_read_only - Threads: 512
Queries Per Second > Higher Is Better

MariaDB 11.5
Test: oltp_read_write - Threads: 512
Queries Per Second > Higher Is Better

MariaDB 11.5
Test: oltp_write_only - Threads: 512
Queries Per Second > Higher Is Better

MariaDB 11.5
Test: oltp_point_select - Threads: 512
Queries Per Second > Higher Is Better

MariaDB 11.5
Test: oltp_update_index - Threads: 512
Queries Per Second > Higher Is Better

MariaDB 11.5
Test: oltp_update_non_index - Threads: 512
Queries Per Second > Higher Is Better

GIMP 2.10.36
Test: resize
Seconds < Lower Is Better
run2 . 14.87

GIMP 2.10.36
Test: rotate
Seconds < Lower Is Better
run2 . 15.41

GIMP 2.10.36
Test: auto-levels
Seconds < Lower Is Better
run2 . 15.79

GIMP 2.10.36
Test: unsharp-mask
Seconds < Lower Is Better
run2 . 17.83

Blender 4.3
Blend File: BMW27 - Compute: CPU-Only
Seconds < Lower Is Better
run2 . 257.39

Blender 4.3
Blend File: BMW27 - Compute: Intel oneAPI
Seconds < Lower Is Better

Blender 4.3
Blend File: Classroom - Compute: CPU-Only
Seconds < Lower Is Better
run2 . 764.78

Blender 4.3
Blend File: Fishy Cat - Compute: CPU-Only
Seconds < Lower Is Better
run2 . 357.31

Blender 4.3
Blend File: Classroom - Compute: Intel oneAPI
Seconds < Lower Is Better

Blender 4.3
Blend File: Fishy Cat - Compute: Intel oneAPI
Seconds < Lower Is Better

ONNX Runtime 1.19
Model: GPT-2 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 3.92140

ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 255.06

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 42.95

ONNX Runtime 1.19
Model: ZFNet-512 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 23.28

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 93.00

ONNX Runtime 1.19
Model: T5 Encoder - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 10.75

ONNX Runtime 1.19
Model: bertsquad-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 214.64

ONNX Runtime 1.19
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 4.65766

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 0.617991

ONNX Runtime 1.19
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 1619.25

ONNX Runtime 1.19
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 85.38

ONNX Runtime 1.19
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 11.71

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 41.80

ONNX Runtime 1.19
Model: super-resolution-10 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 23.92

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better
run2 . 0.279928

ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel
Inference Time Cost (ms) < Lower Is Better
run2 . 3575.63

ONNX Runtime 1.19
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
Inferences Per Second > Higher Is Better

Selenium
Benchmark: Kraken - Browser: Firefox
Score > Higher Is Better

Selenium
Benchmark: Octane - Browser: Firefox
Score > Higher Is Better

Selenium
Benchmark: WebXPRT - Browser: Firefox
Score > Higher Is Better

Selenium
Benchmark: Speedometer - Browser: Firefox
Score > Higher Is Better

OpenVINO GenAI 2024.5
Model: Gemma-7b-int4-ov - Device: CPU
tokens/s > Higher Is Better
run2 . 4.70

OpenVINO GenAI 2024.5
Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token
ms < Lower Is Better
run2 . 1174.99

OpenVINO GenAI 2024.5
Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token
ms < Lower Is Better
run2 . 212.47

OpenVINO GenAI 2024.5
Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU
tokens/s > Higher Is Better
run2 . 15.79

OpenVINO GenAI 2024.5
Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU - Time To First Token
ms < Lower Is Better
run2 . 97.93

OpenVINO GenAI 2024.5
Model: TinyLlama-1.1B-Chat-v1.0 - Device: CPU - Time Per Output Token
ms < Lower Is Better
run2 . 63.34

OpenVINO GenAI 2024.5
Model: Falcon-7b-instruct-int4-ov - Device: CPU
tokens/s > Higher Is Better
run2 . 7.05

OpenVINO GenAI 2024.5
Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token
ms < Lower Is Better
run2 . 938.76

OpenVINO GenAI 2024.5
Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token
ms < Lower Is Better
run2 . 141.91

OpenVINO GenAI 2024.5
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU
tokens/s > Higher Is Better
run2 . 10.01

OpenVINO GenAI 2024.5
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token
ms < Lower Is Better
run2 . 297.88

OpenVINO GenAI 2024.5
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token
ms < Lower Is Better
run2 . 99.95
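For processing an export like this programmatically, each result line follows one shape: a run identifier, a dot separator, a numeric value, and an optional trailing bar graph. A minimal parsing sketch; the `parse_result` helper and its regex are illustrative, not part of the Phoronix Test Suite itself:

```python
import re

# A PTS text-export result line looks like "run2 . 204.76" or
# "run2 . 150422 |================" (the bar is cosmetic).
RESULT_RE = re.compile(
    r"^(?P<ident>\S+)\s+\.\s+(?P<value>[0-9][0-9.,]*)\s*(?:\|=*\s*)?$"
)

def parse_result(line: str):
    """Return (run identifier, numeric value) for a result line, else None."""
    m = RESULT_RE.match(line.strip())
    if not m:
        return None  # header, unit, or blank line
    return m.group("ident"), float(m.group("value").replace(",", ""))

print(parse_result("run2 . 204.76"))    # ('run2', 204.76)
print(parse_result("byte/s > Higher Is Better"))  # None
```

Headers and unit lines fail the match and return None, so a caller can walk the file line by line and attach each parsed value to the most recently seen test header.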