readme : add Aquila-7B model series to supported models (#2487)
author    ldwang <redacted>
          Wed, 2 Aug 2023 08:21:11 +0000 (16:21 +0800)
committer GitHub <redacted>
          Wed, 2 Aug 2023 08:21:11 +0000 (11:21 +0300)
* support bpe tokenizer in convert

Signed-off-by: ldwang <redacted>
* support bpe tokenizer in convert

Signed-off-by: ldwang <redacted>
* support bpe tokenizer in convert, fix

Signed-off-by: ldwang <redacted>
* Add Aquila-7B models in README.md

Signed-off-by: ldwang <redacted>
* Update Aquila-7B models in README.md

Signed-off-by: ldwang <redacted>
---------

Signed-off-by: ldwang <redacted>
Co-authored-by: ldwang <redacted>
README.md

index 515c80c42ec85b28caa0b0b10bc4e9852194d86f..2ece294b7c94709be0b0daefaf0d9f13f28f72ce 100644 (file)
--- a/README.md
+++ b/README.md
@@ -88,6 +88,7 @@ as the main playground for developing new features for the [ggml](https://github
 - [X] [Pygmalion 7B / Metharme 7B](#using-pygmalion-7b--metharme-7b)
 - [X] [WizardLM](https://github.com/nlpxucan/WizardLM)
 - [X] [Baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) and its derivations (such as [baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft))
+- [X] [Aquila-7B](https://huggingface.co/BAAI/Aquila-7B) / [AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
 
 **Bindings:**
 
@@ -492,6 +493,9 @@ Building the program with BLAS support may lead to some performance improvements
 # obtain the original LLaMA model weights and place them in ./models
 ls ./models
 65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
+# [Optional] for models using BPE tokenizers
+ls ./models
+65B 30B 13B 7B vocab.json
 
 # install Python dependencies
 python3 -m pip install -r requirements.txt
@@ -499,6 +503,9 @@ python3 -m pip install -r requirements.txt
 # convert the 7B model to ggml FP16 format
 python3 convert.py models/7B/
 
+# [Optional] for models using BPE tokenizers
+python3 convert.py models/7B/ --vocabtype bpe
+
 # quantize the model to 4-bits (using q4_0 method)
 ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
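As a usage note for the BPE path introduced here, below is a minimal end-to-end sketch for a model such as Aquila-7B. The directory name `models/aquila-7b/` and the prompt text are illustrative assumptions; the `--vocabtype bpe` flag, the `quantize` invocation, and the `main` options mirror the commands already shown in the README diff above.

```sh
# place the model weights and their BPE vocab.json in a model directory (path is illustrative)
ls ./models/aquila-7b

# convert to ggml FP16 using the BPE tokenizer path added by this change
python3 convert.py models/aquila-7b/ --vocabtype bpe

# quantize the result to 4 bits (q4_0), same as for the LLaMA models
./quantize ./models/aquila-7b/ggml-model-f16.bin ./models/aquila-7b/ggml-model-q4_0.bin q4_0

# run a short test generation with the quantized model
./main -m ./models/aquila-7b/ggml-model-q4_0.bin -n 128 -p "Hello, my name is"
```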