fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)
author Bas Nijholt <redacted>
Wed, 13 Aug 2025 18:21:31 +0000 (11:21 -0700)
committer GitHub <redacted>
Wed, 13 Aug 2025 18:21:31 +0000 (11:21 -0700)
The flake.nix included references to the llama-cpp.cachix.org cache with a comment
claiming it is 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving only
the working cuda-maintainers cache that actually provides CUDA dependencies.

Users can still manually add the llama-cpp cache if it becomes functional in the future.
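For anyone who later wants to opt back in manually, the sketch below shows a minimal downstream flake.nix that re-adds the cache. It reuses the substituter URL and public key that this change removes, and it assumes the upstream flake exposes a default package for x86_64-linux; it only helps if the cache is actually being populated.

```nix
# Minimal sketch: a downstream flake that opts back into the llama-cpp cache.
{
  # Per-flake substituter settings; Nix will ask for confirmation (or require a
  # trusted user) before honoring them.
  nixConfig = {
    extra-substituters = [
      # Only useful if something is actually pushing builds to this cache.
      "https://llama-cpp.cachix.org"
    ];
    extra-trusted-public-keys = [
      # Same key that the removed comment listed.
      "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
    ];
  };

  inputs.llama-cpp.url = "github:ggml-org/llama.cpp";

  outputs = { self, llama-cpp }: {
    # Re-export the upstream default package (assumed to exist for this system).
    packages.x86_64-linux.default = llama-cpp.packages.x86_64-linux.default;
  };
}
```

Alternatively, `cachix use llama-cpp` writes the same substituter and key into the local nix.conf, assuming the cachix CLI is installed.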

flake.nix

index 0b5edf911fd066be7a53f0e83636c9d2cf97efb5..bb02c8e52f9ad78e2b66eda8091e5c9356361798 100644
--- a/flake.nix
+++ b/flake.nix
@@ -36,9 +36,6 @@
   # ```
   # nixConfig = {
   #   extra-substituters = [
-  #     # Populated by the CI in ggml-org/llama.cpp
-  #     "https://llama-cpp.cachix.org"
-  #
   #     # A development cache for nixpkgs imported with `config.cudaSupport = true`.
   #     # Populated by https://hercules-ci.com/github/SomeoneSerge/nixpkgs-cuda-ci.
   #     # This lets one skip building e.g. the CUDA-enabled openmpi.
@@ -46,10 +43,8 @@
  #   ];
   #
   #   # Verify these are the same keys as published on
-  #   # - https://app.cachix.org/cache/llama-cpp
   #   # - https://app.cachix.org/cache/cuda-maintainers
   #   extra-trusted-public-keys = [
-  #     "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
   #     "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
   #   ];
   # };