fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)
author	Bas Nijholt <redacted>
	Wed, 13 Aug 2025 18:21:31 +0000 (11:21 -0700)
committer	GitHub <redacted>
	Wed, 13 Aug 2025 18:21:31 +0000 (11:21 -0700)
commit	1adc9812bd33dc85489bf093528d61c22917d54f
tree	b1e5eefb692fa81e3fcd0db754963ce534158430
parent	b3e16665e14032d1276f9d0263acef8321b6f518
fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (#15295)

The flake.nix included references to the llama-cpp.cachix.org cache with a comment
claiming it is 'Populated by the CI in ggml-org/llama.cpp', but:

1. No visible CI workflow populates this cache
2. The cache contains no artifacts for recent builds (tested with b6150, among others)
3. This misleads users into expecting pre-built binaries that don't exist

This change removes the non-functional cache references entirely, leaving only
the working cuda-maintainers cache that actually provides CUDA dependencies.

Users can still manually add the llama-cpp cache if it becomes functional in the future.
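For reference, a user who wants to opt back in could add the cache to their Nix configuration themselves. A minimal sketch of the relevant `nix.conf` entries is shown below; the public key value is a placeholder, since the real key would have to be obtained from cachix if the cache ever becomes functional again:

```nix
# Sketch only: re-adding the cache manually in nix.conf (or via nixConfig
# in a flake). The public key below is a PLACEHOLDER, not the real key.
extra-substituters = https://llama-cpp.cachix.org
extra-trusted-public-keys = llama-cpp.cachix.org-1:<placeholder-public-key>
```

Without a matching trusted public key, Nix will refuse to use artifacts from the substituter, so both settings are needed together.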
flake.nix