ai ai ai
AMD Ryzen Threadripper 7980X 64-Cores testing with a System76 Thelio Major (FA Z5 BIOS) and AMD Radeon Pro W7900 45GB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2406028-PTS-AIAIAI0963&sor&grs
ai ai ai - System Details
Configurations a, b, c, d, and e all ran on the same hardware and software stack:

  Processor:         AMD Ryzen Threadripper 7980X 64-Cores @ 7.79GHz (64 Cores / 128 Threads)
  Motherboard:       System76 Thelio Major (FA Z5 BIOS)
  Chipset:           AMD Device 14a4
  Memory:            4 x 32GB DDR5-4800MT/s Micron MTC20F1045S1RC48BA2
  Disk:              1000GB CT1000T700SSD5
  Graphics:          AMD Radeon Pro W7900 45GB
  Audio:             AMD Device 14cc
  Monitor:           DELL P2415Q
  Network:           Aquantia AQC113C NBase-T/IEEE + Realtek RTL8125 2.5GbE + Intel Wi-Fi 6E
  OS:                Ubuntu 24.04
  Kernel:            6.8.0-060800-generic (x86_64)
  Desktop:           GNOME Shell 46.0
  Display Server:    X Server + Wayland
  OpenGL:            4.6 Mesa 24.0.5-1ubuntu1 (LLVM 17.0.6 DRM 3.57)
  Compiler:          GCC 13.2.0
  File-System:       ext4
  Screen Resolution: 1920x1200

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); CPU Microcode: 0xa108105

Security Details:
  gather_data_sampling: Not affected
  itlb_multihit: Not affected
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  mmio_stale_data: Not affected
  retbleed: Not affected
  spec_rstack_overflow: Mitigation of Safe RET
  spec_store_bypass: Mitigation of SSB disabled via prctl
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected
  srbds: Not affected
  tsx_async_abort: Not affected
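The Security Details strings above are the kernel's own vulnerability reporting; on any recent Linux system the same entries can be read back from sysfs. A minimal Python sketch (the path is the standard sysfs location, not something specific to this result):

    from pathlib import Path

    # Each entry in the Security Details list above corresponds to one file
    # in the kernel's sysfs vulnerability directory (spectre_v1, meltdown, ...).
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")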
ai ai ai - Results Summary (OpenBenchmarking.org)
whisper-cpp results are in Seconds (fewer is better); llamafile and llama-cpp results are in Tokens Per Second (more is better). A dash marks a result not present in the export; the llava-v1.6-mistral-7b.Q8_0 test is listed but has no recorded values.

  Test                                                    a          b          c          d          e
  llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - CPU          52.94      50.74      52.41      -          52.94
  whisper-cpp: ggml-base.en - 2016 State of the Union     118.86537  118.64985  118.25585  118.40684  119.41782
  whisper-cpp: ggml-medium.en - 2016 State of the Union   558.13719  555.70312  559.56644  560.63738  559.87831
  whisper-cpp: ggml-small.en - 2016 State of the Union    232.19527  232.16119  234.12406  232.77281  233.88327
  llamafile: wizardcoder-python-34b-v1.0.Q6_K - CPU       4.02       4.00       4.00       -          3.99
  llamafile: mistral-7b-instruct-v0.2.Q5_K_M - CPU        22.18      22.03      22.17      -          22.08
  llama-cpp: Meta-Llama-3-8B-Instruct-Q8_0.gguf           13.09      13.10      13.14      13.09      13.08
  llamafile: Meta-Llama-3-8B-Instruct.F16 - CPU           8.65       -          -          -          8.63
  llamafile: llava-v1.6-mistral-7b.Q8_0 - CPU             -          -          -          -          -
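Since configurations a through e share the same system, the spread across them gives a quick read on run-to-run variance. A small Python sketch, with the values copied from the summary table above (None marks a missing result):

    # Per-test spread across configurations a..e; values copied from the
    # results summary above. None = no result recorded for that configuration.
    results = {
        "llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - CPU": [52.94, 50.74, 52.41, None, 52.94],
        "whisper-cpp: ggml-base.en": [118.86537, 118.64985, 118.25585, 118.40684, 119.41782],
        "whisper-cpp: ggml-medium.en": [558.13719, 555.70312, 559.56644, 560.63738, 559.87831],
        "whisper-cpp: ggml-small.en": [232.19527, 232.16119, 234.12406, 232.77281, 233.88327],
        "llamafile: wizardcoder-python-34b-v1.0.Q6_K - CPU": [4.02, 4.00, 4.00, None, 3.99],
        "llamafile: mistral-7b-instruct-v0.2.Q5_K_M - CPU": [22.18, 22.03, 22.17, None, 22.08],
        "llama-cpp: Meta-Llama-3-8B-Instruct-Q8_0.gguf": [13.09, 13.10, 13.14, 13.09, 13.08],
        "llamafile: Meta-Llama-3-8B-Instruct.F16 - CPU": [8.65, None, None, None, 8.63],
    }

    for test, values in results.items():
        vals = [v for v in values if v is not None]
        spread_pct = (max(vals) - min(vals)) / min(vals) * 100
        print(f"{test}: {spread_pct:.2f}% spread across {len(vals)} configurations")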
Llamafile 0.8.6
Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: CPU
Tokens Per Second, More Is Better (OpenBenchmarking.org); SE +/- 0.09, N = 3
  e: 52.94
  a: 52.94
  c: 52.41
  b: 50.74
Whisper.cpp 1.6.2
Model: ggml-base.en - Input: 2016 State of the Union
Seconds, Fewer Is Better (OpenBenchmarking.org); SE +/- 0.32, N = 3
  c: 118.26
  d: 118.41
  b: 118.65
  a: 118.87
  e: 119.42
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni
Whisper.cpp 1.6.2
Model: ggml-medium.en - Input: 2016 State of the Union
Seconds, Fewer Is Better (OpenBenchmarking.org); SE +/- 0.34, N = 3
  b: 555.70
  a: 558.14
  c: 559.57
  e: 559.88
  d: 560.64
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni
Whisper.cpp 1.6.2
Model: ggml-small.en - Input: 2016 State of the Union
Seconds, Fewer Is Better (OpenBenchmarking.org); SE +/- 0.24, N = 3
  b: 232.16
  a: 232.20
  d: 232.77
  e: 233.88
  c: 234.12
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni
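The "SE +/- x, N = 3" annotations indicate each reported result is the mean of three runs together with its standard error. Assuming the usual definition (sample standard deviation divided by the square root of N), a Python sketch with illustrative per-run times; the individual runs are not included in this export, so these three values are hypothetical, chosen only to reproduce configuration a's 118.87 s mean and 0.32 SE for ggml-base.en:

    import statistics

    # Illustrative per-run times only: the export reports the mean and SE,
    # not the three underlying runs.
    runs = [118.32, 118.87, 119.42]  # seconds

    mean = statistics.mean(runs)
    # Standard error of the mean: sample standard deviation / sqrt(N).
    se = statistics.stdev(runs) / len(runs) ** 0.5
    print(f"mean = {mean:.2f} s, SE +/- {se:.2f}, N = {len(runs)}")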
Llamafile 0.8.6
Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
Tokens Per Second, More Is Better (OpenBenchmarking.org); SE +/- 0.01, N = 3
  a: 4.02
  c: 4.00
  b: 4.00
  e: 3.99
Llamafile 0.8.6
Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU
Tokens Per Second, More Is Better (OpenBenchmarking.org); SE +/- 0.06, N = 3
  a: 22.18
  c: 22.17
  e: 22.08
  b: 22.03
Llama.cpp b3067
Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
Tokens Per Second, More Is Better (OpenBenchmarking.org); SE +/- 0.01, N = 3
  c: 13.14
  b: 13.10
  d: 13.09
  a: 13.09
  e: 13.08
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas
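The llamafile and llama.cpp scores above are text-generation throughput, so converting between a score and an expected generation time is simple division. A short Python sketch using configuration a's llama.cpp score; the response length is a hypothetical example, not part of the benchmark:

    # tokens/s = generated tokens / wall-clock seconds, so time = tokens / rate.
    tokens_per_second = 13.09   # llama.cpp score for configuration a, above
    response_tokens = 512       # hypothetical response length
    est_seconds = response_tokens / tokens_per_second
    print(f"~{est_seconds:.1f} s to generate {response_tokens} tokens")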
Llamafile 0.8.6
Test: Meta-Llama-3-8B-Instruct.F16 - Acceleration: CPU
Tokens Per Second, More Is Better (OpenBenchmarking.org)
  a: 8.65
  e: 8.63
Phoronix Test Suite v10.8.5