readme : update hot topics
author    Georgi Gerganov <redacted>    Sun, 31 Mar 2024 08:56:30 +0000 (11:56 +0300)
committer GitHub <redacted>             Sun, 31 Mar 2024 08:56:30 +0000 (11:56 +0300)
README.md

index d46e075555391071625991e0ee4ef957506726ed..1207763fc6f7bde43176839309d319a42c6b2c42 100644 (file)
--- a/README.md
+++ b/README.md
@@ -18,12 +18,12 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
+- Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
 - Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
 - Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
 - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
-- Support loading sharded model, using `gguf-split` CLI https://github.com/ggerganov/llama.cpp/pull/6187
 
 ----