commit 12247f4c69a173b9482f68aaa174ec37fc909ccf
tree 1c580de91d5d0676e146bb45b9197d88aeb226fd
parent 4e9a7f7f7fb6acbddd1462909c8d696e38edbfcc
author Andrew Canis <redacted> Fri, 15 Mar 2024 20:41:22 +0000 (16:41 -0400)
committer GitHub <redacted> Fri, 15 Mar 2024 20:41:22 +0000 (22:41 +0200)
llama : add Command-R support (#6033)

Information about the Command-R 35B model (128k context) can be found at:
https://huggingface.co/CohereForAI/c4ai-command-r-v01

Based on the llama2 architecture with a few changes (a sketch of the
resulting block follows the list):

1) New hyperparameter logit_scale that scales the output logits
2) Uses LayerNorm instead of RMSNorm
3) Transformer layers have a single shared LayerNorm that feeds into both the
   self-attention and FFN layers in parallel. There is no post-attention LayerNorm.
4) No support for Rotary Position Embeddings (RoPE) scaling
5) No biases used
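
For readers comparing the two designs, below is a minimal NumPy sketch of the
block structure described above. It is an illustration only, not the llama.cpp
implementation; attention and ffn are hypothetical stand-ins for the real kernels.

import numpy as np

def layer_norm(x, eps=1e-5):
    # Full LayerNorm: subtract the mean, divide by the standard deviation
    # (llama2 uses RMSNorm instead, which skips the mean subtraction).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def command_r_block(x, attention, ffn):
    # One shared LayerNorm feeds both branches, which run in parallel;
    # there is no post-attention LayerNorm.
    h = layer_norm(x)
    return x + attention(h) + ffn(h)

def llama2_block(x, attention, ffn, rms_norm):
    # For contrast: llama2 runs the branches sequentially, each behind
    # its own RMSNorm.
    x = x + attention(rms_norm(x))
    return x + ffn(rms_norm(x))

def output_logits(hidden, w_out, logit_scale):
    # The new logit_scale hyperparameter multiplies the final logits.
    return logit_scale * (hidden @ w_out.T)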

Find GGUF files here:
https://huggingface.co/andrewcanis/c4ai-command-r-v01-GGUF

To convert the model to GGUF format yourself:

1) Download the Command-R Hugging Face safetensors:
git lfs install
git clone https://huggingface.co/CohereForAI/c4ai-command-r-v01

2) Run:
python3 convert-hf-to-gguf.py --outtype f16 ./c4ai-command-r-v01
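
After conversion, you can sanity-check the new metadata with the gguf-py
package that ships in this repo. A hedged sketch: the output file name and
the key names (e.g. command-r.logit_scale) are assumptions based on the
convert script's defaults and the constants this commit adds.

from gguf import GGUFReader  # gguf-py lives under gguf-py/ in this repo

# Default output path of the convert step above; adjust if yours differs.
reader = GGUFReader("c4ai-command-r-v01/ggml-model-f16.gguf")

# Assumed key names, derived from the constants added in this commit.
for name in ("command-r.logit_scale", "command-r.context_length"):
    field = reader.fields.get(name)
    if field is not None:
        # Scalar fields keep their value at parts[data[0]].
        print(name, field.parts[field.data[0]][0])
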
README.md
convert-hf-to-gguf.py
gguf-py/gguf/constants.py
gguf-py/gguf/gguf_writer.py
llama.cpp