From: Georgi Gerganov
Date: Mon, 29 Apr 2024 14:06:19 +0000 (+0300)
Subject: readme : update hot topics
X-Git-Tag: upstream/0.0.4488~1725
X-Git-Url: https://git.djapps.eu/?a=commitdiff_plain;h=24affa7db3c9db148854b0ab4fd63de8bca7d898;p=pkg%2Fggml%2Fsources%2Fllama.cpp

readme : update hot topics
---

diff --git a/README.md b/README.md
index cc667f59..a2aa9214 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
+- **BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920**
+- MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017