From: Georgi Gerganov
Date: Tue, 7 May 2024 18:43:13 +0000 (+0300)
Subject: readme : update hot topics
X-Git-Tag: upstream/0.0.4488~1686
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=53d6c52e227dedef347b21e28febcfb9caeecdad;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : update hot topics
---

diff --git a/README.md b/README.md
index 885322e6..75fc10a1 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920**
+- **Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021**
+- BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225