git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commitdiff
README minor fixes (#7798) [no ci]
author Mattheus Chediak <redacted>
Thu, 6 Jun 2024 12:17:54 +0000 (09:17 -0300)
committer GitHub <redacted>
Thu, 6 Jun 2024 12:17:54 +0000 (22:17 +1000)
derievatives --> derivatives

README.md

index 9d2a59d89d6f8aac4837b951dcc7ccd87e8beb84..09e8cad31bf6200b3bba837906871f4cb15f6218 100644 (file)
--- a/README.md
+++ b/README.md
@@ -598,7 +598,7 @@ Building the program with BLAS support may lead to some performance improvements
 
 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
-Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derievatives.
+Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
 It does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.
 
 ```bash