From: Georgi Gerganov
Date: Sun, 4 Jun 2023 20:38:19 +0000 (+0300)
Subject: readme : update hot topics
X-Git-Tag: gguf-v0.4.0~693
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=827f5eda91e5b7299848ee2c7179d873bdee0f7b;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : update hot topics
---

diff --git a/README.md b/README.md
index 4fc877c6..01f13812 100644
--- a/README.md
+++ b/README.md
@@ -9,9 +9,11 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 **Hot topics:**
-- Quantization formats `Q4` and `Q8` have changed again (19 May) - [(info)](https://github.com/ggerganov/llama.cpp/pull/1508)
-- Quantization formats `Q4` and `Q5` have changed - requantize any old models [(info)](https://github.com/ggerganov/llama.cpp/pull/1405)
-- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)
+- GPU support with Metal (Apple Silicon): https://github.com/ggerganov/llama.cpp/pull/1642
+- High-quality 2,3,4,5,6-bit quantization: https://github.com/ggerganov/llama.cpp/pull/1684
+- Multi-GPU support: https://github.com/ggerganov/llama.cpp/pull/1607
+- Training LLaMA models from scratch: https://github.com/ggerganov/llama.cpp/pull/1652
+- CPU threading improvements: https://github.com/ggerganov/llama.cpp/pull/1632