readme : update API changes and hot topics
author Georgi Gerganov <redacted>
Wed, 13 Mar 2024 18:33:56 +0000 (20:33 +0200)
committer GitHub <redacted>
Wed, 13 Mar 2024 18:33:56 +0000 (20:33 +0200)
README.md

index 54bf84bec67bb3ebd67ad4d8f09c0440affa796d..80037782fe9d6bcc72b8b82cd4f185fae905d5d3 100644 (file)
--- a/README.md
+++ b/README.md
@@ -10,12 +10,14 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Recent API changes
 
+- [2024 Mar 13] Add `llama_synchronize()` + `llama_context_params.n_ubatch` https://github.com/ggerganov/llama.cpp/pull/6017
 - [2024 Mar 8] `llama_kv_cache_seq_rm()` returns a `bool` instead of `void`, and new `llama_n_seq_max()` returns the upper limit of acceptable `seq_id` in batches (relevant when dealing with multiple sequences) https://github.com/ggerganov/llama.cpp/pull/5328
 - [2024 Mar 4] Embeddings API updated https://github.com/ggerganov/llama.cpp/pull/5796
 - [2024 Mar 3] `struct llama_context_params` https://github.com/ggerganov/llama.cpp/pull/5849
 
 ### Hot topics
 
+- Multi-GPU pipeline parallelism support https://github.com/ggerganov/llama.cpp/pull/6017
 - Looking for contributions to add Deepseek support: https://github.com/ggerganov/llama.cpp/issues/5981
 - Quantization blind testing: https://github.com/ggerganov/llama.cpp/discussions/5962
 - Initial Mamba support has been added: https://github.com/ggerganov/llama.cpp/pull/5328
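The Mar 8 entry above changes `llama_kv_cache_seq_rm()` to return a `bool` instead of `void`, and adds `llama_n_seq_max()` for querying the upper limit of acceptable `seq_id` values. A minimal sketch of the resulting calling pattern is below; the struct and function bodies here are hypothetical stand-ins for illustration only, since the real definitions live in `llama.h` and the library internals:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for the real llama.cpp types (see llama.h).
typedef int32_t llama_seq_id;
typedef int32_t llama_pos;
struct llama_context { uint32_t n_seq_max; };

// After PR #5328, removal reports success as a bool instead of void.
// Stub body for illustration: accept only sequence ids below the limit.
static bool llama_kv_cache_seq_rm(llama_context * ctx, llama_seq_id seq_id,
                                  llama_pos /*p0*/, llama_pos /*p1*/) {
    return seq_id >= 0 && (uint32_t) seq_id < ctx->n_seq_max;
}

// The same PR adds a query for the upper limit of acceptable seq_id values.
static uint32_t llama_n_seq_max(const llama_context * ctx) {
    return ctx->n_seq_max;
}

// Callers can now detect a failed removal instead of assuming it worked.
static void clear_sequence(llama_context * ctx, llama_seq_id seq_id) {
    if (!llama_kv_cache_seq_rm(ctx, seq_id, 0, -1)) {
        fprintf(stderr, "failed to remove KV cache entries for seq %d\n", seq_id);
    }
}
```

The point of the change is the error path: with multiple sequences, a caller can check the return value against `llama_n_seq_max()`-bounded ids rather than silently issuing a no-op removal.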