Allow compiling with CUDA without CUDA runtime installed (#7989)
author	Ulrich Drepper <redacted>
Tue, 18 Jun 2024 12:00:14 +0000 (14:00 +0200)
committer	GitHub <redacted>
Tue, 18 Jun 2024 12:00:14 +0000 (14:00 +0200)
commit	61665277afde2add00c0d387acb94ed5feb95917
tree	df0cd24e71030cce64a2deba456a04c8cf4acc75
parent	b96f9afb0d58b003ac8d1d0c94cd99393a3bc437
Allow compiling with CUDA without CUDA runtime installed (#7989)

On hosts that are not prepared or dedicated to running CUDA code, it is
still possible to compile llama.cpp with CUDA support by installing only
the development packages.  What is missing are the runtime libraries
such as /usr/lib64/libcuda.so*, so the link step currently fails.

The development environment anticipates such situations.  Stub versions
of all the CUDA libraries are provided in the $(CUDA_PATH)/lib64/stubs
directory.  Adding this directory to the end of the linker search path
changes nothing for environments that already work, but enables
compiling llama.cpp even when the runtime libraries are not available.
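A minimal sketch of the kind of Makefile change described above (variable names like CUDA_PATH and MK_LDFLAGS are assumptions for illustration; see the actual diff for the real change):

```makefile
# Sketch only: append the CUDA stub directory to the end of the linker
# search path.  Because -L directories are searched in order, real
# runtime libraries found earlier still take precedence; the stubs are
# used only when no real library exists.
ifdef LLAMA_CUDA
    CUDA_PATH ?= /usr/local/cuda
    MK_LDFLAGS += -L$(CUDA_PATH)/lib64/stubs
endif
```

Note that the stubs only satisfy the linker: a binary linked against them still requires the real driver libraries (e.g. libcuda.so provided by the NVIDIA driver) to be present at run time.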
Makefile