flake : build llama.cpp on Intel with nix (#2795)
author     Volodymyr Vitvitskyi <redacted>
Sat, 26 Aug 2023 13:25:39 +0000 (14:25 +0100)
committer  GitHub <redacted>
Sat, 26 Aug 2023 13:25:39 +0000 (16:25 +0300)
Problem
-------
`nix build` fails with missing `Accelerate.h`.

Changes
-------
- Fix the nix build of llama.cpp on Intel macOS: add the same Apple SDK
frameworks as for ARM
- Add the `quantize` app to the nix flake outputs
- Extend the nix devShell with `llama-python` so the convert script
(`convert.py`) can be used (see the sketch below)
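
For context, a minimal sketch of how the devShell pieces could fit together in
flake.nix, assuming `pkgs`, `nativeBuildInputs`, and `osSpecific` are already in
scope as they are elsewhere in the flake; the exact package set behind
`llama-python` is an assumption and not part of this diff:

```nix
# Sketch only: the concrete llama-python definition is assumed, not taken
# from this commit's diff excerpt.
let
  # Python environment with the dependencies convert.py typically needs.
  llama-python = pkgs.python3.withPackages (ps: with ps; [
    numpy
    sentencepiece
  ]);
in
pkgs.mkShell {
  # Make the Python environment available inside `nix develop`.
  buildInputs = [ llama-python ];
  # Keep the existing build toolchain and OS-specific dependencies.
  packages = nativeBuildInputs ++ osSpecific;
}
```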

Testing
-------
Testing the steps with nix:
1. `nix build`
2. Get the model, then `nix develop` and run `python convert.py models/llama-2-7b.ggmlv3.q4_0.bin`
3. `nix run llama.cpp#quantize -- open_llama_7b/ggml-model-f16.gguf ./models/ggml-model-q4_0.bin 2`
4. `nix run llama.cpp#llama -- -m models/ggml-model-q4_0.bin -p "What is nix?" -n 400 --temp 0.8 -e -t 8`

Co-authored-by: Volodymyr Vitvitskyi <redacted>
flake.nix

index 616b902529d46c5a8c104083af8747c7ff8c1c47..d454cedc3714a3ff7929cb6315c7c093ad40ea8d 100644
--- a/flake.nix
+++ b/flake.nix
               CoreGraphics
               CoreVideo
             ]
+          else if isDarwin then
+            with pkgs.darwin.apple_sdk.frameworks; [
+              Accelerate
+              CoreGraphics
+              CoreVideo
+            ]
           else
             with pkgs; [ openblas ]
         );
           type = "app";
           program = "${self.packages.${system}.default}/bin/llama";
         };
+        apps.quantize = {
+          type = "app";
+          program = "${self.packages.${system}.default}/bin/quantize";
+        };
         apps.default = self.apps.${system}.llama;
         devShells.default = pkgs.mkShell {
+          buildInputs = [ llama-python ];
           packages = nativeBuildInputs ++ osSpecific;
         };
       });