From: Georgi Gerganov
Date: Fri, 10 Mar 2023 23:18:10 +0000 (+0200)
Subject: Update README.md
X-Git-Tag: gguf-v0.4.0~1295
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=6da2df34ee40301d9ecb126968ec4c0c6195f26d;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Update README.md
---

diff --git a/README.md b/README.md
index e16dd074..e7e7cb2a 100644
--- a/README.md
+++ b/README.md
@@ -139,5 +139,5 @@ python3 convert-pth-to-ggml.py models/7B/ 1
 In general, it seems to work, but I think it fails for unicode character support. Hopefully, someone can help with that
 - I don't know yet how much the quantization affects the quality of the generated text
 - Probably the token sampling can be improved
-- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this on Apple Silicon
+- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this on Apple Silicon. For now, on Linux and Windows you can use the F16 `ggml-model-f16.bin` model, but it will be much slower.
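
Note (not part of the commit): the F16 path mentioned in the added README line would look roughly like the sketch below, reusing the convert step already shown in the README and pointing ./main at the F16 file instead of a quantized one. The flags (-m, -t, -n, -p) are assumed from the README's existing example invocation and may differ in later versions.

    # convert the 7B model to ggml F16 format (same step as in the README)
    python3 convert-pth-to-ggml.py models/7B/ 1

    # skip the ./quantize step and run inference directly on the (much slower) F16 model
    ./main -m ./models/7B/ggml-model-f16.bin -t 8 -n 128 -p "Hello, my name is"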