From: Georgi Gerganov
Date: Tue, 21 Mar 2023 20:57:35 +0000 (+0200)
Subject: Add notice about pending change
X-Git-Tag: gguf-v0.4.0~1166
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=3366853e41fcc818222a0271c76b6106179106fb;p=pkg%2Fggml%2Fsources%2Fllama.cpp

Add notice about pending change
---

diff --git a/README.md b/README.md
index d9a4b1ba..6149032b 100644
--- a/README.md
+++ b/README.md
@@ -5,15 +5,21 @@
 
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
+---
+
+**TEMPORARY NOTICE:**
+Big code change incoming: https://github.com/ggerganov/llama.cpp/pull/370
+
+Do not merge stuff until we merge this. Probably merge will happen on March 22 ~6:00am UTC
+
+---
+
 **Hot topics:**
 
 - [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
 
-**TEMPORARY NOTICE:**
-If you're updating to the latest master, you will need to regenerate your model files as the format has changed.
-
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook