git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commitdiff
readme : add note that LLaMA 3 is not supported with convert.py (#7065)
author Lyle Dean <redacted>
Sun, 5 May 2024 05:21:46 +0000 (06:21 +0100)
committer GitHub <redacted>
Sun, 5 May 2024 05:21:46 +0000 (08:21 +0300)
README.md

index 2f1317662cf388c99222bbfeae7be75a1663e96f..6951966f67d6761535af4c147e9ef38fefffc935 100644 (file)
--- a/README.md
+++ b/README.md
@@ -712,6 +712,8 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
+Note: `convert.py` does not support LLaMA 3. To convert LLaMA 3 models downloaded from Hugging Face, use `convert-hf-to-gguf.py` instead.
+
 ```bash
 # obtain the official LLaMA model weights and place them in ./models
 ls ./models
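The note introduced by this commit can be illustrated with a short sketch. The model directory name and the `--outfile`/`--outtype` flags below are assumptions about the script's usual invocation, not taken from this diff:

```shell
# Hypothetical example: convert a LLaMA 3 checkpoint already downloaded
# from Hugging Face into gguf format. The path ./models/Meta-Llama-3-8B
# is an assumed location for the downloaded model.
python convert-hf-to-gguf.py ./models/Meta-Llama-3-8B \
    --outfile ./models/Meta-Llama-3-8B/ggml-model-f16.gguf \
    --outtype f16
```

Running the converted model afterwards follows the same workflow as any other `gguf` model in the repository.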