old epyc ai
Tests for a future article.

AMD EPYC 7551 32-Core testing with a GIGABYTE MZ31-AR0-00 v01010101 (F10 BIOS) and ASPEED on Debian 12 via the Phoronix Test Suite.

a, b, c (identical configuration):

  Processor: AMD EPYC 7551 32-Core @ 2.00GHz (32 Cores / 64 Threads), Motherboard: GIGABYTE MZ31-AR0-00 v01010101 (F10 BIOS), Chipset: AMD 17h, Memory: 8 x 4 GB DDR4-2133MT/s 9ASF51272PZ-2G6E1, Disk: Samsung SSD 960 EVO 500GB + 31GB SanDisk 3.2Gen1, Graphics: ASPEED, Network: Realtek RTL8111/8168/8411 + 2 x Broadcom NetXtreme II BCM57810 10

  OS: Debian 12, Kernel: 6.1.0-10-amd64 (x86_64), Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1024x768

Llama.cpp b3067
Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
Tokens Per Second > Higher Is Better
a . 1.74 |================================================
c . 2.51 |=====================================================================

Whisper.cpp 1.6.2
Model: ggml-base.en - Input: 2016 State of the Union
Seconds < Lower Is Better
a . 359.63 |======================================================
b . 445.02 |===================================================================

Llamafile 0.8.6
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
Tokens Per Second > Higher Is Better
a . 1.11 |============================================================
c . 1.27 |=====================================================================

Llamafile 0.8.6
Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU
Tokens Per Second > Higher Is Better
a . 5.75 |=====================================================================
c . 5.30 |================================================================

Whisper.cpp 1.6.2
Model: ggml-small.en - Input: 2016 State of the Union
Seconds < Lower Is Better
a . 1415.46 |=================================================================
b . 1433.91 |==================================================================

Llamafile 0.8.6
Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: CPU
Tokens Per Second > Higher Is Better
a . 12.71 |===================================================================
c . 12.81 |====================================================================

Whisper.cpp 1.6.2
Model: ggml-medium.en - Input: 2016 State of the Union
Seconds < Lower Is Better
a . 3286.61 |==================================================================

Llamafile 0.8.6
Test: Meta-Llama-3-8B-Instruct.F16 - Acceleration: CPU
Tokens Per Second > Higher Is Better
(no result recorded)

Llamafile 0.8.6
Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU
Tokens Per Second > Higher Is Better
(no result recorded)
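The run-to-run spreads implied by the figures above can be worked out with a short script. This is illustrative only, not part of the Phoronix Test Suite output: the values are transcribed from this result file and the test labels are abbreviated here for brevity.

```python
# Percentage difference between the two runs of each test, using the
# figures transcribed from the result file above. For the Whisper.cpp
# rows the metric is seconds (lower is better), so a positive delta
# means the second run was slower.
results = {
    "Llama.cpp b3067 Llama-3-8B Q8_0 (tok/s)":  ("a", 1.74,    "c", 2.51),
    "Whisper.cpp base.en (sec)":                ("a", 359.63,  "b", 445.02),
    "Llamafile wizardcoder-34b Q6_K (tok/s)":   ("a", 1.11,    "c", 1.27),
    "Llamafile mistral-7b Q5_K_M (tok/s)":      ("a", 5.75,    "c", 5.30),
    "Whisper.cpp small.en (sec)":               ("a", 1415.46, "b", 1433.91),
    "Llamafile TinyLlama-1.1B BF16 (tok/s)":    ("a", 12.71,   "c", 12.81),
}

for test, (run1, v1, run2, v2) in results.items():
    delta = (v2 / v1 - 1.0) * 100.0
    print(f"{test}: {run2} vs {run1}: {delta:+.1f}%")
```

The spread varies widely by test: the Llama.cpp Q8_0 runs differ by over 40%, while the Whisper.cpp small.en and TinyLlama runs land within about 1-2% of each other.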