From: Mattheus Chediak
Date: Thu, 6 Jun 2024 12:17:54 +0000 (-0300)
Subject: README minor fixes (#7798) [no ci]
X-Git-Tag: upstream/0.0.4488~1389
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=a143c04375828b1f72eb1a326115791b63e79345;p=pkg%2Fggml%2Fsources%2Fllama.cpp

README minor fixes (#7798) [no ci]

derievatives --> derivatives
---

diff --git a/README.md b/README.md
index 9d2a59d8..09e8cad3 100644
--- a/README.md
+++ b/README.md
@@ -598,7 +598,7 @@ Building the program with BLAS support may lead to some performance improvements
 To obtain the official LLaMA 2 weights please see the Obtaining and using the Facebook LLaMA 2 model section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.

-Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derievatives.
+Note: `convert.py` has been moved to `examples/convert-legacy-llama.py` and shouldn't be used for anything other than `Llama/Llama2/Mistral` models and their derivatives.
 It does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.

 ```bash