readme : update hot topics + model links (#3399)
author    BarfingLemurs <redacted>
Fri, 29 Sep 2023 12:50:35 +0000 (08:50 -0400)
committer GitHub <redacted>
Fri, 29 Sep 2023 12:50:35 +0000 (15:50 +0300)
README.md

index 8cdfb04e0132cdd68075e1518bce80b846a1bade..75b6075f2816da27c416c615ae107b06077b4c55 100644 (file)
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 ### Hot topics
 
-- Parallel decoding + continuous batching support incoming: [#3228](https://github.com/ggerganov/llama.cpp/pull/3228) \
+- Parallel decoding + continuous batching support added: [#3228](https://github.com/ggerganov/llama.cpp/pull/3228) \
   **Devs should become familiar with the new API**
 - Local Falcon 180B inference on Mac Studio
 
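Note: the batching API referenced in the hot topic above is the llama_batch interface introduced in #3228. Below is a minimal sketch of decoding two independent sequences in a single call, assuming the API shape around the time of this commit (llama_batch_init(n_tokens, embd), per-token token/pos/seq_id/logits arrays, llama_decode, llama_batch_free); field names and signatures changed in later releases, so treat this as illustrative rather than authoritative.

```cpp
// Sketch: evaluate two independent sequences in one llama_decode call,
// which is the building block for parallel decoding / continuous batching.
// Assumes the #3228-era llama.h batch API; later versions differ.
#include "llama.h"

#include <vector>

static void decode_two_prompts(llama_context * ctx,
                               const std::vector<llama_token> & prompt_a,
                               const std::vector<llama_token> & prompt_b) {
    const int n_tokens = (int) (prompt_a.size() + prompt_b.size());

    // one batch holds tokens from both sequences, distinguished by seq_id
    llama_batch batch = llama_batch_init(n_tokens, /*embd =*/ 0);

    int i = 0;
    for (size_t j = 0; j < prompt_a.size(); ++j, ++i) {
        batch.token [i] = prompt_a[j];
        batch.pos   [i] = (llama_pos) j;
        batch.seq_id[i] = 0;                            // sequence 0
        batch.logits[i] = (j + 1 == prompt_a.size());   // logits for last token only
    }
    for (size_t j = 0; j < prompt_b.size(); ++j, ++i) {
        batch.token [i] = prompt_b[j];
        batch.pos   [i] = (llama_pos) j;
        batch.seq_id[i] = 1;                            // sequence 1
        batch.logits[i] = (j + 1 == prompt_b.size());
    }
    batch.n_tokens = n_tokens;

    // both sequences are decoded together in a single forward pass
    if (llama_decode(ctx, batch) != 0) {
        // handle decode failure
    }

    llama_batch_free(batch);
}
```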
@@ -92,7 +92,8 @@ as the main playground for developing new features for the [ggml](https://github
 - [X] [WizardLM](https://github.com/nlpxucan/WizardLM)
 - [X] [Baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) and its derivations (such as [baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft))
 - [X] [Aquila-7B](https://huggingface.co/BAAI/Aquila-7B) / [AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
-- [X] Mistral AI v0.1
+- [X] [Starcoder models](https://github.com/ggerganov/llama.cpp/pull/3187)
+- [X] [Mistral AI v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 
 **Bindings:**