From: Georgi Gerganov
Date: Wed, 29 Mar 2023 16:38:31 +0000 (+0300)
Subject: readme : fix typos
X-Git-Tag: gguf-v0.4.0~1062
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=b467702b87461543c75013207e9adc6d20dcc01d;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : fix typos
---

diff --git a/README.md b/README.md
index c2323f40..e30452ee 100644
--- a/README.md
+++ b/README.md
@@ -229,13 +229,15 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 ### Using [GPT4All](https://github.com/nomic-ai/gpt4all)
 
 - Obtain the `gpt4all-lora-quantized.bin` model
-- It is distributed in the old `ggml` format which is not obsoleted. So you have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py):
+- It is distributed in the old `ggml` format which is now obsoleted
+- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py):
 
 ```bash
 python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
 ```
 
-- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models. The original model is stored in the same folder with a suffix `.orig`
+- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models
+- The original model is saved in the same folder with a suffix `.orig`
 
 ### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data