Llama.cpp 1.0.0
pts/llama-cpp-1.0.0
- 10 January 2024 -
Initial commit of llama.cpp CPU benchmark.
downloads.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <Downloads>
    <Package>
      <URL>https://github.com/ggerganov/llama.cpp/archive/refs/tags/b1808.tar.gz</URL>
      <MD5>3a659d857520ca4d95f81f1fd03c4bca</MD5>
      <SHA256>f17b18178d7174d9659df74e5832285ccf5dad0c759627500f35aab0aa5ca14f</SHA256>
      <FileName>llama.cpp-b1808.tar.gz</FileName>
      <FileSize>7896552</FileSize>
    </Package>
    <Package>
      <URL>https://huggingface.co/TheBloke/Llama-2-13B-GGUF/resolve/50b3e202c3df49b96551490582cd234472c5eb23/llama-2-13b.Q4_0.gguf</URL>
      <MD5>5bd7db6ab9d3b2c312e46738e89090b3</MD5>
      <SHA256>283fc12bcea5638fc82bcd038f0d19eaf0ebdc7fec9e4536d3e8063c1cc84548</SHA256>
      <FileName>llama-2-13b.Q4_0.gguf</FileName>
      <FileSize>7365834624</FileSize>
      <Optional>TRUE</Optional>
    </Package>
    <Package>
      <URL>https://huggingface.co/TheBloke/Llama-2-70B-Chat-GGUF/resolve/96fbf8c3617a084d06d6947e98e0194aa818e5e8/llama-2-70b-chat.Q5_0.gguf</URL>
      <MD5>49dd1daed8811dc92c60952954d1e4a9</MD5>
      <SHA256>65a8fda53f4c7c6470fd3a4b8e798b80e8b1b7d6be34cd6b521a9a251eeb9c1a</SHA256>
      <FileName>llama-2-70b-chat.Q5_0.gguf</FileName>
      <FileSize>47461397408</FileSize>
      <Optional>TRUE</Optional>
    </Package>
    <Package>
      <URL>https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/b4e04e128f421c93a5f1e34ac4d7ca9b0af47b80/llama-2-7b.Q4_0.gguf</URL>
      <MD5>4344ea31374d6fd63023ee035c9f95ae</MD5>
      <SHA256>78b8f9777dd620ad29cd2cffb6653b17fa8a5b1fddc1b8821180d60eedd24d48</SHA256>
      <FileName>llama-2-7b.Q4_0.gguf</FileName>
      <FileSize>3825807040</FileSize>
      <Optional>TRUE</Optional>
    </Package>
  </Downloads>
</PhoronixTestSuite>
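Each <Package> entry declares an MD5, SHA256, and FileSize for integrity checking after download. A minimal Python sketch of that check (not the actual Phoronix Test Suite implementation; the demo file and its digests are just the well-known hashes of the bytes `hello`):

```python
import hashlib
import os
import tempfile

def verify_package(path, md5, sha256, size):
    """Check a downloaded file against the MD5/SHA256/FileSize
    fields declared for its <Package> entry in downloads.xml."""
    if os.path.getsize(path) != size:
        return False
    with open(path, "rb") as f:
        data = f.read()
    return (hashlib.md5(data).hexdigest() == md5 and
            hashlib.sha256(data).hexdigest() == sha256)

# Demo on a throwaway file rather than the multi-GB model downloads.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

ok = verify_package(
    path,
    md5="5d41402abc4b2a76b9719d911017c592",
    sha256="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
    size=5,
)
print(ok)  # -> True
```

Checking FileSize first is cheap and avoids hashing a 47 GB file that is obviously truncated.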
install.sh
#!/bin/bash
tar -xf llama.cpp-b1808.tar.gz
cd llama.cpp-b1808
make -j LLAMA_OPENBLAS=1
echo $? > ~/install-exit-status

echo "#!/bin/sh
cd llama.cpp-b1808
./main \$@ -p \"Building a website can be done in 10 simple steps:\" -n 512 -e -t \$NUM_CPU_PHYSICAL_CORES > \$LOG_FILE 2>&1
echo \$? > ~/test-exit-status" > ~/llama-cpp
chmod +x ~/llama-cpp
results-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <ResultsParser>
    <OutputTemplate>llama_print_timings: eval time = 18329.86 ms / 399 runs ( 45.94 ms per token, #_RESULT_# tokens per second)</OutputTemplate>
    <ResultBeforeString>tokens</ResultBeforeString>
  </ResultsParser>
</PhoronixTestSuite>
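The `#_RESULT_#` placeholder marks where the score sits in llama.cpp's timing output, and `<ResultBeforeString>tokens</ResultBeforeString>` anchors it as the number immediately before the word "tokens". An approximation of that extraction (not the actual PTS parser; the 21.77 figure in the sample line is made up):

```python
import re

def extract_result(log_text, before_string="tokens"):
    """Pull the tokens-per-second value from a llama.cpp log:
    the number immediately preceding `before_string` on the
    llama_print_timings eval line."""
    for line in log_text.splitlines():
        if "llama_print_timings" in line and "per second" in line:
            m = re.search(r"([\d.]+)\s+" + re.escape(before_string), line)
            if m:
                return float(m.group(1))
    return None

sample = ("llama_print_timings: eval time = 18329.86 ms / 399 runs "
          "( 45.94 ms per token, 21.77 tokens per second)")
print(extract_result(sample))  # -> 21.77
```

The anchor word is what disambiguates the result from the other numbers on the same line (18329.86 ms, 399 runs, 45.94 ms per token).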
test-definition.xml
<?xml version="1.0"?>
<!--Phoronix Test Suite v10.8.4-->
<PhoronixTestSuite>
  <TestInformation>
    <Title>Llama.cpp</Title>
    <AppVersion>b1808</AppVersion>
    <Description>Llama.cpp is a port of Facebook's LLaMA model in C/C++ developed by Georgi Gerganov. Llama.cpp allows the inference of LLaMA and other supported models in C/C++. For CPU inference Llama.cpp supports AVX2/AVX-512, ARM NEON, and other modern ISAs along with features like OpenBLAS usage.</Description>
    <ResultScale>Tokens Per Second</ResultScale>
    <Proportion>HIB</Proportion>
    <TimesToRun>3</TimesToRun>
  </TestInformation>
  <TestProfile>
    <Version>1.0.0</Version>
    <SupportedPlatforms>Linux</SupportedPlatforms>
    <SoftwareType>Utility</SoftwareType>
    <TestType>System</TestType>
    <License>Free</License>
    <ExternalDependencies>build-utilities, blas-development</ExternalDependencies>
    <InstallRequiresInternet>TRUE</InstallRequiresInternet>
    <EnvironmentSize>58700</EnvironmentSize>
    <ProjectURL>https://github.com/ggerganov/llama.cpp/</ProjectURL>
    <RepositoryURL>https://github.com/ggerganov/llama.cpp</RepositoryURL>
    <Maintainer>Michael Larabel</Maintainer>
    <SystemDependencies>pkgconf</SystemDependencies>
  </TestProfile>
  <TestSettings>
    <Option>
      <DisplayName>Model</DisplayName>
      <Identifier>model</Identifier>
      <ArgumentPrefix>-m ../</ArgumentPrefix>
      <Menu>
        <Entry>
          <Name>llama-2-7b.Q4_0.gguf</Name>
          <Value>llama-2-7b.Q4_0.gguf</Value>
        </Entry>
        <Entry>
          <Name>llama-2-13b.Q4_0.gguf</Name>
          <Value>llama-2-13b.Q4_0.gguf</Value>
        </Entry>
        <Entry>
          <Name>llama-2-70b-chat.Q5_0.gguf</Name>
          <Value>llama-2-70b-chat.Q5_0.gguf</Value>
        </Entry>
      </Menu>
    </Option>
  </TestSettings>
</PhoronixTestSuite>
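The Model option maps each menu Value onto the run arguments by prepending `<ArgumentPrefix>-m ../</ArgumentPrefix>` (the `../` is needed because the wrapper first does `cd llama.cpp-b1808`, while the .gguf files sit one level up). A simplified sketch of that composition; the exact command-assembly internals of PTS are an assumption here:

```python
# Menu entries and prefix taken verbatim from test-definition.xml.
MENU = [
    "llama-2-7b.Q4_0.gguf",
    "llama-2-13b.Q4_0.gguf",
    "llama-2-70b-chat.Q5_0.gguf",
]
ARGUMENT_PREFIX = "-m ../"

def run_command(model):
    """Compose the option as ArgumentPrefix + Value and hand it to the
    ~/llama-cpp wrapper, which forwards it to ./main via $@."""
    return "~/llama-cpp " + ARGUMENT_PREFIX + model

for m in MENU:
    print(run_command(m))
# -> ~/llama-cpp -m ../llama-2-7b.Q4_0.gguf   (and likewise for 13B / 70B)
```

With `<TimesToRun>3</TimesToRun>`, PTS runs the chosen command three times and reports the Tokens Per Second result scale, where higher is better (`<Proportion>HIB</Proportion>`).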