From: Bas Nijholt
Date: Wed, 13 Aug 2025 18:21:31 +0000 (-0700)
Subject: fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)
X-Git-Tag: upstream/0.0.6164~13
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=1adc9812bd33dc85489bf093528d61c22917d54f;p=pkg%2Fggml%2Fsources%2Fllama.cpp

fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)

The flake.nix included references to the llama-cpp.cachix.org cache with a
comment claiming it's 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving
only the working cuda-maintainers cache that actually provides CUDA
dependencies. Users can still manually add the llama-cpp cache if it
becomes functional in the future.
---

diff --git a/flake.nix b/flake.nix
index 0b5edf91..bb02c8e5 100644
--- a/flake.nix
+++ b/flake.nix
@@ -36,9 +36,6 @@
   # ```
   # nixConfig = {
   #   extra-substituters = [
-  #     # Populated by the CI in ggml-org/llama.cpp
-  #     "https://llama-cpp.cachix.org"
-  #
   #     # A development cache for nixpkgs imported with `config.cudaSupport = true`.
   #     # Populated by https://hercules-ci.com/github/SomeoneSerge/nixpkgs-cuda-ci.
   #     # This lets one skip building e.g. the CUDA-enabled openmpi.
@@ -47,10 +44,8 @@
   #   ];
   #
   #   # Verify these are the same keys as published on
-  #   # - https://app.cachix.org/cache/llama-cpp
   #   # - https://app.cachix.org/cache/cuda-maintainers
   #   extra-trusted-public-keys = [
-  #     "llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc="
   #     "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVZa7CNfq5E="
   #   ];
   # };
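
Note: if the llama-cpp cache is ever repopulated, a user can opt back in without
touching flake.nix. A minimal sketch of the equivalent settings in the user's own
Nix configuration (e.g. ~/.config/nix/nix.conf or /etc/nix/nix.conf; the exact file
is the user's choice), reusing the substituter URL and public key from the lines
removed above:

    # Sketch only: opt back in to the llama-cpp cache from the user's nix.conf.
    # The URL and key are copied verbatim from the lines removed in this commit.
    extra-substituters = https://llama-cpp.cachix.org
    extra-trusted-public-keys = llama-cpp.cachix.org-1:H75X+w83wUKTIPSO1KWy9ADUrzThyGs8P5tmAbkWhQc=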