From: Mathieu Baudier
Date: Tue, 21 Jan 2025 11:10:08 +0000 (+0100)
Subject: Improve Debian build based on lintian feedback
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=60899e29af4f5b1137afb864c0aac5afa6c583ad;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Improve Debian build based on lintian feedback
---

diff --git a/debian/control b/debian/control
index aa8e5189..7fbd6928 100644
--- a/debian/control
+++ b/debian/control
@@ -2,7 +2,7 @@ Source: llama-cpp
 Section: science
 Priority: optional
 Maintainer: Mathieu Baudier
-Build-Depends: debhelper-compat (= 13), pkg-config, cmake-data, cmake, cpio, curl, libcurl4-openssl-dev, file, git,
+Build-Depends: debhelper-compat (= 13), pkg-config, cmake-data, cmake, cpio, curl, openssl, libcurl4-openssl-dev, file, git,
  ggml-dev
 Standards-Version: 4.5.1
 Homepage: https://github.com/ggerganov/llama.cpp
@@ -11,24 +11,25 @@ Rules-Requires-Root: binary-targets
 Package: libllama
 Priority: optional
 Architecture: any
-Depends: ${shlibs:Depends},
+Multi-Arch: same
+Pre-Depends: ${misc:Pre-Depends}
+Depends: ${misc:Depends}, ${shlibs:Depends},
  libggml
-Recommends: curl
 Description: Inference of LLMs in pure C/C++ (shared library)
  Llama.cpp inference of LLMs in pure C/C++ (shared library).
 
 Package: llama-cpp-cli
 Architecture: any
 Priority: optional
-Depends: ${shlibs:Depends},
- libllama
+Depends: ${misc:Depends}, ${shlibs:Depends},
+ libllama, curl
 Description: Inference of LLMs in pure C/C++ (CLI)
  Llama.cpp inference of LLMs in pure C/C++ (CLI).
 
 Package: llama-cpp-server
 Architecture: any
 Priority: optional
-Depends: ${shlibs:Depends},
+Depends: ${misc:Depends}, ${shlibs:Depends},
  libllama, curl, openssl
 Description: Inference of LLMs in pure C/C++ (CLI)
  Llama.cpp inference of LLMs in pure C/C++ (CLI).
@@ -36,14 +37,15 @@ Description: Inference of LLMs in pure C/C++ (CLI)
 Package: libllama-dev
 Architecture: any
 Priority: optional
-Depends: ggml-dev, libllama
+Depends: ${misc:Depends},
+ ggml-dev, libllama
 Description: Inference of LLMs in pure C/C++ (development files)
  Llama.cpp inference of LLMs in pure C/C++ (development files).
 
 Package: llama-cpp-dev
 Architecture: any
 Priority: optional
-Depends: libllama-dev
-Description: Inference of LLMs in pure C/C++ (common development files)
- Llama.cpp inference of LLMs in pure C/C++ (common development files).
- 
\ No newline at end of file
+Depends: ${misc:Depends},
+ libllama-dev
+Description: Inference of LLMs in pure C/C++ (common static library)
+ Llama.cpp inference of LLMs in pure C/C++ (common static library).
diff --git a/debian/copyright b/debian/copyright
index 7c923453..d690de29 100644
--- a/debian/copyright
+++ b/debian/copyright
@@ -4,7 +4,7 @@ Upstream-Contact: https://github.com/ggerganov/llama.cpp/issues
 Source: https://github.com/ggerganov/llama.cpp
 
 Files: *
-Copyright: Copyright (c) 2023-2024 The llama.cpp authors
+Copyright: Copyright (c) 2023-2025 The llama.cpp authors
 License: MIT
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
@@ -12,10 +12,10 @@ License: MIT
  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  copies of the Software, and to permit persons to whom the Software is
  furnished to do so, subject to the following conditions:
-
+ .
  The above copyright notice and this permission notice shall be included in all
  copies or substantial portions of the Software.
-
+ .
  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -25,7 +25,7 @@ License: MIT
 SOFTWARE.
 
 Files: debian/*
-Copyright: 2024 Mathieu Baudier
+Copyright: 2024-2025 Mathieu Baudier
 License: GPL-2+
  This package is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
diff --git a/debian/rules b/debian/rules
index 1277119c..6509421b 100755
--- a/debian/rules
+++ b/debian/rules
@@ -25,9 +25,9 @@ override_dh_auto_configure:
 	-DLLAMA_ALL_WARNINGS=OFF \
 	-DLLAMA_BUILD_TESTS=OFF \
 	-DLLAMA_BUILD_SERVER=ON \
+	-DLLAMA_USE_CURL=ON \
 	-DLLAMA_SERVER_SSL=ON \
-

 # FIXME we disable LLAMA_ALL_WARNINGS so that ggml_get_flags() CMake function do not get called
 # as it is available deep in GGML and not properly published
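
A possible local check (a sketch, not part of the commit): since these
changes respond to lintian findings, one way to confirm the reported tags
are gone is to rebuild the packages and re-run lintian on the result. This
assumes a standard debhelper build from the package root; the .changes
filename pattern below is illustrative, not taken from this repository:

	# Build unsigned binary packages; dpkg-buildpackage writes the
	# resulting .deb and .changes files to the parent directory.
	dpkg-buildpackage -us -uc

	# Re-run lintian with explanations (-i) on the generated .changes;
	# the source package is named llama-cpp per debian/control.
	lintian -i ../llama-cpp_*.changes

In particular, this should confirm the ${misc:Depends} substitutions added
in debian/control (lintian's debhelper-but-no-misc-depends tag) and the
restored final newline in that file.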