git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commitdiff
readme : update hot topics
author Georgi Gerganov <redacted>
Mon, 29 Apr 2024 14:06:19 +0000 (17:06 +0300)
committer GitHub <redacted>
Mon, 29 Apr 2024 14:06:19 +0000 (17:06 +0300)
README.md

index cc667f592f859ad159b2572f583e6eeb23c558d1..a2aa9214ff24122145e335397d1785b82d282aab 100644 (file)
--- a/README.md
+++ b/README.md
@@ -20,7 +20,8 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
-- **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
+- **BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920**
+- MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017