From: Mathieu Baudier
Date: Sun, 23 Feb 2025 11:16:31 +0000 (+0100)
Subject: Improve Debian packaging documentation
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=3d5c1cadb09d1ecbd61a574b3f7661c604882fc9;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Improve Debian packaging documentation
---

diff --git a/debian/control b/debian/control
index ed76d2eb..4602b51f 100644
--- a/debian/control
+++ b/debian/control
@@ -24,16 +24,17 @@ Description: Inference of large language models in pure C/C++ (shared library)
  llama.cpp leverages the ggml tensor library in order to run
  large language models (LLMs) provided in the GGUF file format.
 
+# We only distribute a few very useful tools, with stable CLI options
 Package: llama-cpp-tools
 Architecture: any
 Depends: ${misc:Depends}, ${shlibs:Depends}, libllama0, ggml, curl
 Description: Inference of large language models in pure C/C++ (tools)
- llama-cli: utility tool wrapping most features provided by libllama.
+ llama-cli: versatile tool wrapping most features provided by libllama.
  It typically allows one to run one-shot prompts or to "chat"
  with a large language model.
  .
- llama-quantize: utility tool to "quantize" a large language model
+ llama-quantize: utility to "quantize" a large language model
  GGUF file. Quantizing is the process of reducing the precision
  of the underlying neural-network at aminimal cost to its accuracy.
  .
diff --git a/debian/not-installed b/debian/not-installed
index e03f0328..c02e7add 100644
--- a/debian/not-installed
+++ b/debian/not-installed
@@ -1,6 +1,10 @@
+# Most executables produced are not stable enough to be distributed
 /usr/bin/llama-*
 /usr/libexec/*/ggml/llama-*
 
+# Python is not supported
+/usr/bin/*.py
+
+# Test executables are not distributed
 /usr/bin/test-*
-/usr/bin/*.py