old epyc ai

AMD EPYC 7551 32-Core testing with a GIGABYTE MZ31-AR0-00 v01010101 (F10 BIOS) and ASPEED on Debian 12 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2406027-NE-OLDEPYCAI23
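For reference, a minimal command sequence for reproducing this comparison on a Debian system might look like the sketch below. It assumes the phoronix-test-suite package is installable through apt (otherwise it can be installed manually from phoronix-test-suite.com) and that the result ID above is still published on OpenBenchmarking.org:

    # Install the Phoronix Test Suite (assumption: the Debian package is available via apt).
    sudo apt install phoronix-test-suite

    # Download the published result file, run the same tests locally, and
    # add your machine to the comparison under a new identifier:
    phoronix-test-suite benchmark 2406027-NE-OLDEPYCAI23
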
Test categories covered: HPC - High Performance Computing (3 tests), Large Language Models (2 tests), Machine Learning (3 tests)

Run Management

Result Identifier    Date       Test Duration
a                    June 02    1 Hour, 48 Minutes
b                    June 02    32 Minutes
c                    June 02    14 Minutes

Average test duration: 51 Minutes

Old Epyc Ai Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD EPYC 7551 32-Core @ 2.00GHz (32 Cores / 64 Threads)
Motherboard: GIGABYTE MZ31-AR0-00 v01010101 (F10 BIOS)
Chipset: AMD 17h
Memory: 8 x 4 GB DDR4-2133MT/s 9ASF51272PZ-2G6E1
Disk: Samsung SSD 960 EVO 500GB + 31GB SanDisk 3.2Gen1
Graphics: ASPEED
Network: Realtek RTL8111/8168/8411 + 2 x Broadcom NetXtreme II BCM57810 10
OS: Debian 12
Kernel: 6.1.0-10-amd64 (x86_64)
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1024x768

System Logs / Notes:
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-bTRWOB/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0x8001227
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Results Summary (old epyc ai)

Test                                                                 a          b          c
Llama.cpp: Meta-Llama-3-8B-Instruct-Q8_0.gguf (Tokens/s)            1.74       -          2.51
Whisper.cpp: ggml-base.en - 2016 State of the Union (Seconds)       359.63     445.02     -
Llamafile: wizardcoder-python-34b-v1.0.Q6_K - CPU (Tokens/s)        1.11       -          1.27
Llamafile: mistral-7b-instruct-v0.2.Q5_K_M - CPU (Tokens/s)         5.75       -          5.30
Whisper.cpp: ggml-small.en - 2016 State of the Union (Seconds)      1415.46    1433.91    -
Llamafile: TinyLlama-1.1B-Chat-v1.0.BF16 - CPU (Tokens/s)           12.71      -          12.81
Whisper.cpp: ggml-medium.en - 2016 State of the Union (Seconds)     3286.61    -          -
Llamafile: llava-v1.6-mistral-7b.Q8_0 - CPU (Tokens/s)              -          -          -
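If only a subset of the tests above is of interest, the individual test profiles can also be run directly instead of replaying the full comparison. A rough sketch, assuming the OpenBenchmarking.org profile names llama-cpp, whisper-cpp, and llamafile correspond to the tests listed above:

    # Run only the individual test profiles used in this comparison
    # (profile names are an assumption based on the test listing above):
    phoronix-test-suite benchmark pts/llama-cpp pts/whisper-cpp pts/llamafile
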

Llama.cpp

Llama.cpp b3067 - Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
Tokens Per Second, More Is Better
a: 1.74
c: 2.51
(CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Whisper.cpp

Whisper.cpp 1.6.2 - Model: ggml-base.en - Input: 2016 State of the Union
Seconds, Fewer Is Better
a: 359.63
b: 445.02
(CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2

Llamafile

Llamafile 0.8.6 - Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
Tokens Per Second, More Is Better
a: 1.11
c: 1.27

Llamafile 0.8.6 - Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU
Tokens Per Second, More Is Better
a: 5.75
c: 5.30

Whisper.cpp

Whisper.cpp 1.6.2 - Model: ggml-small.en - Input: 2016 State of the Union
Seconds, Fewer Is Better
a: 1415.46
b: 1433.91
(CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2

Llamafile

Llamafile 0.8.6 - Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: CPU
Tokens Per Second, More Is Better
a: 12.71
c: 12.81

Whisper.cpp

Whisper.cpp 1.6.2 - Model: ggml-medium.en - Input: 2016 State of the Union
Seconds, Fewer Is Better
a: 3286.61
(CXX) g++ options: -O3 -std=c++11 -fPIC -pthread -msse3 -mssse3 -mavx -mf16c -mfma -mavx2

Llamafile

Test: Meta-Llama-3-8B-Instruct.F16 - Acceleration: CPU

a: The test quit with a non-zero exit status.

c: The test quit with a non-zero exit status.

Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU

a: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory

c: The test quit with a non-zero exit status. E: ./run-llava: line 2: ./llava-v1.6-mistral-7b.Q8_0.llamafile.86: No such file or directory