Package: ggml
Architecture: any
Depends: ${misc:Depends},
- libggml, libggml-cpu
+ libggml0, libggml-cpu
Description: Tensor library for machine learning (metapackage)
ggml is a pure C/C++ library implementing tensor computations
used by neural networks. It is the basis for llama.cpp (large language
models). Additional backends can be installed separately, typically to
support specific computing hardware such as GPUs.
-Package: libggml-base
+Package: libggml-base0
+Section: libs
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Description: Tensor library for machine learning (base library)
The ggml base library provides the backend-independent API
upon which specialized libraries or applications can be built.
-Package: libggml
+Package: libggml0
+Section: libs
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${misc:Depends}, ${shlibs:Depends},
- libggml-base
+ libggml-base0
Description: Tensor library for machine learning (loader)
The ggml library is a thin high-level layer, mostly
responsible for loading the various ggml backends.
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${misc:Depends}, ${shlibs:Depends},
- libggml
+ libggml0
Description: Tensor library for machine learning (CPU backend)
The ggml CPU backend provides computations based solely
on plain CPU, without software or hardware acceleration.
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${misc:Depends}, ${shlibs:Depends},
- libggml,
+ libggml0,
# GGML warns that it uses OpenMP if the default (pthread) OpenBLAS variant is used.
libopenblas0-openmp, libopenblas64-0-openmp [amd64 arm64],
Description: Tensor library for machine learning (OpenBLAS backend)
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${misc:Depends}, ${shlibs:Depends},
- libggml
+ libggml0
Description: Tensor library for machine learning (RPC backend)
The ggml RPC backend allows computations to be distributed over
the network to remote ggml backends.
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${misc:Depends}, ${shlibs:Depends},
- libggml,
+ libggml0,
libvulkan1
Description: Tensor library for machine learning (Vulkan backend)
The ggml Vulkan backend provides hardware acceleration of the
computations on GPUs via the Vulkan API.
Package: ggml-dev
Architecture: any
Depends: ${misc:Depends},
- libggml (= ${binary:Version})
+ libggml0 (= ${binary:Version}), libggml-base0 (= ${binary:Version})
Description: Tensor library for machine learning (development files)
This development package provides the files required to build
software based on ggml.
--- /dev/null
+From: Mathieu Baudier <mbaudier@argeo.org>
+Date: Sat, 22 Feb 2025 07:52:42 +0100
+Subject: improve-cmake-build
+
+---
+ src/CMakeLists.txt | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
+index 0002ac1..48687f0 100644
+--- a/src/CMakeLists.txt
++++ b/src/CMakeLists.txt
+@@ -353,5 +353,9 @@ if (BUILD_SHARED_LIBS)
+ set_target_properties(${target} PROPERTIES POSITION_INDEPENDENT_CODE ON)
+ target_compile_definitions(${target} PRIVATE GGML_BUILD)
+ target_compile_definitions(${target} PUBLIC GGML_SHARED)
++ if(DEFINED GGML_BUILD_NUMBER)
++ message(STATUS "Set ${target} shared library version to 0.0.${GGML_BUILD_NUMBER}")
++ set_target_properties(${target} PROPERTIES VERSION 0.0.${GGML_BUILD_NUMBER} SOVERSION 0)
++ endif()
+ endforeach()
+ endif()
# parallelism
DEB_BUILD_OPTIONS ?= parallel=8
+# hardening
+export DEB_BUILD_MAINT_OPTIONS = hardening=+all
+
# ggml specific
ifeq ($(DEB_HOST_ARCH),arm64)
GGML_CPU_AARCH64 ?= ON
override_dh_auto_configure:
dh_auto_configure -- \
-DCMAKE_SKIP_BUILD_RPATH=ON \
+ -DGGML_ALL_WARNINGS=OFF \
-DCMAKE_LIBRARY_ARCHITECTURE="$(DEB_HOST_MULTIARCH)" \
+ -DCMAKE_PROJECT_ggml_INCLUDE=debian/cmake/debian-ggml.cpp.cmake \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=ON \
-DGGML_CCACHE=OFF \
install -t $(install_libexec_multiarch)/ggml $$file; \
done
- # whisper.cpp currently requires linking to a CPU backend
- # we therefore provide a link in /usr/lib/*/
- # TODO use alternative
- ln -s --relative $(install_libexec_multiarch)/ggml/libggml-cpu-sandybridge.so $(install_lib_multiarch)/libggml-cpu.so
+ # Provide a symbolic link to the most portable CPU backend,
+ # so that builds can link against it (e.g. the whisper.cpp build requires a CPU backend)
+ ln -s --relative $(install_libexec_multiarch)/ggml/libggml-cpu-sandybridge.so $(install_libexec_multiarch)/ggml/libggml-cpu.so
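The replacement keeps both the link and its target under the private libexec directory, and `--relative` makes the link target relative to the link's own directory, so the tree stays valid under a different install root (e.g. when staged in `debian/tmp`). A throwaway sketch of the effect — the multiarch triplet below is just an example:

```shell
set -e
dir=$(mktemp -d)
# Stand-in for $(install_libexec_multiarch)/ggml in the rules file.
mkdir -p "$dir/usr/libexec/x86_64-linux-gnu/ggml"
cd "$dir/usr/libexec/x86_64-linux-gnu/ggml"
touch libggml-cpu-sandybridge.so
# --relative computes the target relative to the link's directory,
# so the link survives if the whole tree is moved or bind-mounted.
ln -s --relative libggml-cpu-sandybridge.so libggml-cpu.so
target=$(readlink libggml-cpu.so)
echo "$target"
```

Because the resulting target is the bare file name rather than an absolute path, the symlink keeps working no matter where the package contents are unpacked.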