git.djapps.eu Git - pkg/ggml/sources/ggml/commit
llama : support RWKV v6 models (llama/8980)
author      Molly Sophia <redacted>
            Sun, 1 Sep 2024 14:38:17 +0000 (22:38 +0800)
committer   Georgi Gerganov <redacted>
            Sun, 8 Sep 2024 11:43:07 +0000 (14:43 +0300)
commit      c032b8c88ad8ef80fd76541fa709aeab250445b4
tree        36a65cefa091cf2e6388d432b9579fedac2e3d7c
parent      c584042ed3a492cd0fd132f65869075d7165ff8f

llama : support RWKV v6 models (llama/8980)

* convert_hf_to_gguf: Add support for RWKV v6

Signed-off-by: Molly Sophia <redacted>
* Add RWKV tokenization

* Fix build

Signed-off-by: Molly Sophia <redacted>
* Do not use special tokens when matching in RWKV tokenizer

* Fix model loading

* Add (broken) placeholder graph builder for RWKV

* Add workaround for kv cache

* Add logits conversion to rwkv5

* Add rwkv5 layer norms

* Add time mix KVRG & correct merge mistake

* Add remaining time mix parameters

* Add time mix output loading

* Add placeholder llm_build_time_mix

* Fix build

Signed-off-by: Molly Sophia <redacted>
* Load more tensors for rwkv v6

Signed-off-by: Molly Sophia <redacted>
* Fix rwkv tokenizer

Signed-off-by: Molly Sophia <redacted>
* ggml: Add unary operator Exp

Signed-off-by: Molly Sophia <redacted>
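
A minimal sketch of how the new element-wise op can be used, assuming it follows the usual ggml unary-op signature (``ggml_exp(ctx, a)``); this is illustrative, not code from this commit:

```cpp
#include "ggml.h"

// Build and run a tiny graph that applies the new Exp op; error handling omitted.
int main() {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ nullptr,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * y = ggml_exp(ctx, x);   // y[i] = e^{x[i]}

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, y);

    for (int i = 0; i < 4; ++i) {
        ggml_set_f32_1d(x, i, (float) i);
    }
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    ggml_free(ctx);
    return 0;
}
```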
* RWKV v6 graph building

Signed-off-by: Molly Sophia <redacted>
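
For context, the per-head recurrence that the time-mix graph computes is roughly the following (symbols follow the RWKV papers, not tensor names from this commit; in v6 the decay $w_t$ is token-dependent via the low-rank ``time_decay_extra_dim`` projection, whereas in v5 it is a fixed parameter):

$$
\mathrm{state}_t = \operatorname{diag}(w_t)\,\mathrm{state}_{t-1} + k_t^{\top} v_t,\qquad
o_t = r_t\left(\operatorname{diag}(u)\,k_t^{\top} v_t + \mathrm{state}_{t-1}\right)
$$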
* Add ``rescale_every_n_layers`` parameter

Signed-off-by: Molly Sophia <redacted>
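
A sketch of how such a rescale parameter is typically applied: halve the residual stream every n layers so FP16 activations stay in range. The helper below is illustrative, not the code added here:

```cpp
#include "ggml.h"

// Illustrative only: halve the residual stream every n layers, as RWKV
// implementations commonly do to keep FP16 activations in range. The
// helper and parameter names are placeholders, not code from this commit.
static struct ggml_tensor * maybe_rescale(
        struct ggml_context * ctx,
        struct ggml_tensor  * cur,                 // residual stream after layer `il`
        int                   il,                  // 0-based layer index
        int                   rescale_every_n_layers) {
    if (rescale_every_n_layers > 0 && (il + 1) % rescale_every_n_layers == 0) {
        cur = ggml_scale(ctx, cur, 0.5f);
    }
    return cur;
}
```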
* Add ``wkv.head_size`` key for RWKV

so it doesn't reuse Mamba ssm parameters

Signed-off-by: Molly Sophia <redacted>
* Fix offloading layers to CUDA

Signed-off-by: Molly Sophia <redacted>
* Fix parallel inference for RWKV

Signed-off-by: Molly Sophia <redacted>
* Remove trailing whitespaces

Signed-off-by: Molly Sophia <redacted>
* build_rwkv: Avoid using inplace operations

Signed-off-by: Molly Sophia <redacted>
* convert_hf_to_gguf: rwkv: Avoid using ``eval``

Signed-off-by: Molly Sophia <redacted>
* convert_hf_to_gguf: rwkv tokenizer: Don't escape sequences manually

Signed-off-by: Molly Sophia <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* ggml: Add backward computation for unary op ``exp``

Signed-off-by: Molly Sophia <redacted>
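
Since $\frac{d}{dx} e^{x} = e^{x}$, the backward pass only needs to scale the incoming gradient by the op's own output (or, equivalently, by the exponential of the input):

$$
y = e^{x},\qquad \frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\cdot e^{x} = \frac{\partial L}{\partial y}\cdot y
$$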
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <redacted>
* Use MODEL_ARCH.RWKV6 instead of MODEL_ARCH.RWKV

Signed-off-by: Molly Sophia <redacted>
* build_rwkv6: Simplify graph

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Detect model.type

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Fix tensor loading for 7B/14B models

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Fix group_norm assertion failure with Metal

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Clean up

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Add quantization tensor exclusion

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Use the new advanced batch splits

Signed-off-by: Molly Sophia <redacted>
* Update src/llama.cpp

Co-authored-by: compilade <redacted>
* llama: rwkv6: Use ``ggml_norm`` instead of ``ggml_group_norm``

Co-authored-by: compilade <redacted>
* llama: rwkv6: Apply code style and misc changes

Signed-off-by: Molly Sophia <redacted>
* converter: Use class name ``Rwkv6Model``

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Make use of key ``feed_forward_length``

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Add kv ``time_mix_extra_dim`` and ``time_decay_extra_dim``

Signed-off-by: Molly Sophia <redacted>
* converter: Match ``new_name`` instead of ``name`` for float32 explicit tensors

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Keep ``time_mix_w1/w2`` as F32

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Remove unused nodes

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Apply code format changes

Signed-off-by: Molly Sophia <redacted>
* llama: rwkv6: Add lora for some supported tensors

Currently covers att.key/receptance/value/gate/output, ffn.receptance/key/value, as well as head.weight

Signed-off-by: Molly Sophia <redacted>
* rwkv : speed-up tokenization using trie
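
The RWKV tokenizer does greedy longest-match over byte sequences; a trie turns each lookup into a single walk from the current position instead of a scan over the whole vocabulary. A hedged sketch of the idea (not the code from this commit; vocab loading and byte-fallback handling are simplified):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Greedy longest-match tokenization over a byte trie.
struct rwkv_trie {
    struct node {
        std::map<uint8_t, int32_t> next;   // byte -> index of child node
        int32_t token_id = -1;             // -1 means no token ends here
    };

    std::vector<node> nodes;

    rwkv_trie() { nodes.emplace_back(); }  // node 0 is the root

    void insert(const std::string & piece, int32_t id) {
        int32_t cur = 0;
        for (unsigned char c : piece) {
            auto it = nodes[cur].next.find(c);
            if (it == nodes[cur].next.end()) {
                nodes.emplace_back();
                it = nodes[cur].next.emplace(c, (int32_t) nodes.size() - 1).first;
            }
            cur = it->second;
        }
        nodes[cur].token_id = id;
    }

    // walk the trie once per emitted token, remembering the longest match
    std::vector<int32_t> tokenize(const std::string & text) const {
        std::vector<int32_t> out;
        size_t pos = 0;
        while (pos < text.size()) {
            int32_t cur = 0, best_id = -1;
            size_t  best_len = 0;
            for (size_t i = pos; i < text.size(); ++i) {
                auto it = nodes[cur].next.find((unsigned char) text[i]);
                if (it == nodes[cur].next.end()) break;
                cur = it->second;
                if (nodes[cur].token_id >= 0) {
                    best_id  = nodes[cur].token_id;
                    best_len = i - pos + 1;
                }
            }
            if (best_id >= 0) {
                out.push_back(best_id);
                pos += best_len;
            } else {
                ++pos;  // no match: skip the byte (real code maps it to a byte token)
            }
        }
        return out;
    }
};
```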

* minor : style + indentation

* llama: rwkv6: Avoid division by zero

Co-authored-by: compilade <redacted>
* ggml: rwkv_wkv: Avoid copying the state

Signed-off-by: Molly Sophia <redacted>
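
Written as plain loops, the per-head step the fused op performs looks roughly like the code below; updating ``state`` in place is what avoids the extra copy. This is a sketch of the recurrence above, not the ggml kernel itself, and names and layout are illustrative:

```cpp
#include <vector>

// One WKV step for a single head of size S. `state` is an S x S matrix
// stored row-major as value-dim x key-dim and is updated in place.
static void wkv_step(std::vector<float>       & state,  // S*S, updated in place
                     const std::vector<float> & r,      // S
                     const std::vector<float> & k,      // S
                     const std::vector<float> & v,      // S
                     const std::vector<float> & w,      // S, per-channel decay for this token
                     const std::vector<float> & u,      // S, "bonus" for the current token
                     std::vector<float>       & out,    // S, output for this token
                     int S) {
    out.assign(S, 0.0f);
    for (int i = 0; i < S; ++i) {            // value dimension
        for (int j = 0; j < S; ++j) {        // key dimension
            const float kv = k[j] * v[i];
            const float s  = state[i*S + j];
            out[i]        += r[j] * (u[j] * kv + s);
            state[i*S + j] = s * w[j] + kv;  // in-place state update
        }
    }
}
```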
---------

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Layl Bongers <redacted>
Co-authored-by: compilade <redacted>
Co-authored-by: Georgi Gerganov <redacted>
include/ggml.h
src/ggml.c