git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commitdiff
Add notice about pending change
author Georgi Gerganov <redacted>
Tue, 21 Mar 2023 20:57:35 +0000 (22:57 +0200)
committer GitHub <redacted>
Tue, 21 Mar 2023 20:57:35 +0000 (22:57 +0200)
README.md

index d9a4b1babcaa3c93c54f921fcec1058d25f29173..6149032b1b3e706c0c5fd1810b28f5395e8a7708 100644
--- a/README.md
+++ b/README.md
@@ -5,15 +5,21 @@
 
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
+---
+
+**TEMPORARY NOTICE:**
+Big code change incoming: https://github.com/ggerganov/llama.cpp/pull/370
+
+Do not merge anything until this is merged. The merge will probably happen on March 22, ~6:00am UTC
+
+---
+
 **Hot topics:**
 
 - [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
 
-**TEMPORARY NOTICE:**
-If you're updating to the latest master, you will need to regenerate your model files as the format has changed.
-
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook