kdkdk

AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS) and AMD Radeon RX 7900 GRE 16GB on Ubuntu 24.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2406020-PTS-KDKDK07936&sor&grr.

System configuration (identical across runs a, b, c, d, e):

Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 16GB DDR5-6000MT/s Crucial CP16G60C36U5W.M8D1
Disk: Western Digital WD_BLACK SN850X 2000GB + 4001GB Western Digital WD_BLACK SN850X 4000GB
Graphics: AMD Radeon RX 7900 GRE 16GB (2200/3000MHz)
Audio: AMD Navi 31 HDMI/DP
Monitor: DELL U2723QE
Network: Intel I225-V + Intel Wi-Fi 6E
OS: Ubuntu 24.04
Kernel: 6.8.0-31-generic (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.0.5-1ubuntu1 (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); CPU Microcode: 0xa601206

Security Details:
- gather_data_sampling: Not affected
- itlb_multihit: Not affected
- l1tf: Not affected
- mds: Not affected
- meltdown: Not affected
- mmio_stale_data: Not affected
- reg_file_data_sampling: Not affected
- retbleed: Not affected
- spec_rstack_overflow: Mitigation of Safe RET
- spec_store_bypass: Mitigation of SSB disabled via prctl
- spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
- spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected
- srbds: Not affected
- tsx_async_abort: Not affected

Result summary (systems a-e):

llamafile: Meta-Llama-3-8B-Instruct.F16 - CPU (Tokens Per Second): a: 4.42, c: 4.4, d: 4.38
whisper-cpp: ggml-medium.en - 2016 State of the Union (Seconds): a: 648.53817, b: 646.96444, c: 648.86975, d: 649.91712, e: 651.79719
whisper-cpp: ggml-small.en - 2016 State of the Union (Seconds): a: 225.22591, b: 223.34056, c: 225.65538, d: 225.35352, e: 224.3112
llamafile: wizardcoder-python-34b-v1.0.Q6_K - CPU (Tokens Per Second): a: 2.3, b: 2.32, c: 2.3, d: 2.29, e: 2.3
whisper-cpp: ggml-base.en - 2016 State of the Union (Seconds): a: 81.10251, b: 80.81378, c: 81.54243, d: 80.67784, e: 80.6863
llamafile: mistral-7b-instruct-v0.2.Q5_K_M - CPU (Tokens Per Second): a: 12.25, b: 12.17, d: 12.2, e: 12.14
llama-cpp: Meta-Llama-3-8B-Instruct-Q8_0.gguf (Tokens Per Second): a: 8.08, b: 8.05, c: 8.03, d: 8.06, e: 8.07
llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - CPU (Tokens Per Second): a: 30.07, b: 30.19, c: 30.12, d: 30.23, e: 30.14
llamafile: llava-v1.6-mistral-7b.Q8_0 - CPU: no values reported
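Since all five runs use an identical configuration, the run-to-run spread is a rough repeatability check on the numbers above. A minimal sketch (values transcribed from the result summary, rounded to two decimals; the dictionary layout is my own, not Phoronix Test Suite output):

```python
# Run-to-run spread across the identically configured systems a-e.
# A subset of tests from the summary above; systems that did not run
# a test are simply absent from its entry.
results = {
    "llamafile: Meta-Llama-3-8B-Instruct.F16 - CPU": {
        "a": 4.42, "c": 4.40, "d": 4.38,
    },
    "whisper-cpp: ggml-medium.en - 2016 State of the Union": {
        "a": 648.54, "b": 646.96, "c": 648.87, "d": 649.92, "e": 651.80,
    },
    "llama-cpp: Meta-Llama-3-8B-Instruct-Q8_0.gguf": {
        "a": 8.08, "b": 8.05, "c": 8.03, "d": 8.06, "e": 8.07,
    },
}

# Percent gap between the best and worst run of each test.
spreads = {
    test: (max(runs.values()) - min(runs.values())) / min(runs.values()) * 100
    for test, runs in results.items()
}

for test, pct in spreads.items():
    print(f"{test}: {pct:.2f}% spread over {len(results[test])} runs")
```

All three tests land under a 1% best-to-worst gap, which is the kind of margin the SE annotations in the per-test results below also suggest.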

Llamafile

Test: Meta-Llama-3-8B-Instruct.F16 - Acceleration: CPU

Llamafile 0.8.6 - Tokens Per Second, More Is Better (SE +/- 0.03, N = 2):
a: 4.42
c: 4.40
d: 4.38

Whisper.cpp

Model: ggml-medium.en - Input: 2016 State of the Union

Whisper.cpp 1.6.2 - Seconds, Fewer Is Better (SE +/- 0.46, N = 3):
b: 646.96
a: 648.54
c: 648.87
d: 649.92
e: 651.80
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Whisper.cpp

Model: ggml-small.en - Input: 2016 State of the Union

Whisper.cpp 1.6.2 - Seconds, Fewer Is Better (SE +/- 0.84, N = 3):
b: 223.34
e: 224.31
a: 225.23
d: 225.35
c: 225.66
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Llamafile

Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU

Llamafile 0.8.6 - Tokens Per Second, More Is Better (SE +/- 0.00, N = 3):
b: 2.32
e: 2.30
c: 2.30
a: 2.30
d: 2.29

Whisper.cpp

Model: ggml-base.en - Input: 2016 State of the Union

Whisper.cpp 1.6.2 - Seconds, Fewer Is Better (SE +/- 0.25, N = 3):
d: 80.68
e: 80.69
b: 80.81
a: 81.10
c: 81.54
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512dq -mavx512bw -mavx512vbmi -mavx512vnni

Llamafile

Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU

Llamafile 0.8.6 - Tokens Per Second, More Is Better (SE +/- 0.02, N = 3):
a: 12.25
d: 12.20
b: 12.17
e: 12.14

Llama.cpp

Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf

Llama.cpp b3067 - Tokens Per Second, More Is Better (SE +/- 0.02, N = 3):
a: 8.08
e: 8.07
d: 8.06
b: 8.05
c: 8.03
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Llamafile

Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: CPU

Llamafile 0.8.6 - Tokens Per Second, More Is Better (SE +/- 0.05, N = 3):
d: 30.23
b: 30.19
e: 30.14
c: 30.12
a: 30.07


Phoronix Test Suite v10.8.5