llama: add support for QRWKV6 model architecture (#11001)
author    Molly Sophia <redacted>
          Fri, 10 Jan 2025 01:58:08 +0000 (09:58 +0800)
committer GitHub <redacted>
          Fri, 10 Jan 2025 01:58:08 +0000 (09:58 +0800)
commit    ee7136c6d1e0ba7633294dad137b1573048031ec
tree      7aaf56a126b7ab6da25b789b041a8c6d5298ce5b
parent    c6860cc7346c90219475e4467bb8a288e0df975c
llama: add support for QRWKV6 model architecture (#11001)

* WIP: Add support for RWKV6Qwen2

Signed-off-by: Molly Sophia <redacted>
* RWKV: Some graph simplification

Signed-off-by: Molly Sophia <redacted>
* Add support for RWKV6Qwen2 with CPU and CUDA GLA

Signed-off-by: Molly Sophia <redacted>
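
A minimal sketch of the gated linear attention (GLA) recurrence that the new
CPU and CUDA paths compute; the single-head layout and the names q, k, v, g
are illustrative assumptions, not the actual ggml kernel interface:

    import numpy as np

    # Per-head GLA recurrence: the running key-value state is decayed by a
    # per-channel gate g_t, updated with the outer product k_t v_t^T, then
    # read out with the query/receptance vector q_t.
    def gated_linear_attn(q, k, v, g, state):
        # q, k, v, g: (T, D); state: (D, D)
        T, D = q.shape
        out = np.empty((T, D))
        for t in range(T):
            state = g[t][:, None] * state + np.outer(k[t], v[t])
            out[t] = q[t] @ state
        return out, state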
* RWKV6[QWEN2]: Concat lerp weights together to reduce CPU overhead

Signed-off-by: Molly Sophia <redacted>
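
The lerp concatenation amounts to stacking the per-projection token-shift
interpolation weights into one tensor, so a single broadcast op replaces
several separate lerps. A minimal sketch; mu_stack is a hypothetical name
for the fused weight tensor:

    import numpy as np

    # x, x_prev: (D,) current and shifted token embeddings;
    # mu_stack: (N, D) interpolation weights for N projections, stacked so
    # one broadcast lerp replaces N separate x + mu_i * (x_prev - x) ops.
    def token_shift_lerp_fused(x, x_prev, mu_stack):
        return x + mu_stack * (x_prev - x)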
* Fix some typos

Signed-off-by: Molly Sophia <redacted>
* Code format changes

Signed-off-by: Molly Sophia <redacted>
* Fix WKV test & add GLA test

Signed-off-by: Molly Sophia <redacted>
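
A naive reference of the WKV6-style recurrence is the kind of baseline a
backend test can compare kernel output against. This sketch assumes a single
head with per-channel decay w and first-token bonus u; it is not the actual
test-backend-ops code:

    import numpy as np

    def wkv6_ref(r, k, v, w, u, state):
        # r, k, v, w: (T, D); u: (D,); state: (D, D)
        T, D = r.shape
        out = np.empty((T, D))
        for t in range(T):
            kv = np.outer(k[t], v[t])                  # k_t v_t^T
            out[t] = r[t] @ (state + u[:, None] * kv)  # bonus for current token
            state = w[t][:, None] * state + kv         # per-channel decay
        return out, state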
* Fix cuda warning

Signed-off-by: Molly Sophia <redacted>
* Update README.md

Signed-off-by: Molly Sophia <redacted>
* Update ggml/src/ggml-cuda/gla.cu

Co-authored-by: Georgi Gerganov <redacted>
* Fix fused lerp weights loading with RWKV6

Signed-off-by: Molly Sophia <redacted>
* Better sanity-check skipping for QRWKV6 in llama-quant

Thanks @compilade

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: compilade <redacted>
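
A hedged illustration of the sanity-check skipping idea: during quantization,
a tensor whose row size is not a multiple of the quant block size falls back
to an unquantized type instead of aborting. All names and the block size are
illustrative, not llama-quant's actual code:

    # QK is an assumed block size of a block-quantized type.
    QK = 32

    def choose_tensor_type(n_per_row, wanted="Q4_K", fallback="F16"):
        # Skip quantization for rows the block format cannot represent,
        # rather than failing the whole conversion.
        if n_per_row % QK != 0:
            return fallback
        return wanted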
---------

Signed-off-by: Molly Sophia <redacted>
Co-authored-by: Georgi Gerganov <redacted>
Co-authored-by: compilade <redacted>
23 files changed:
README.md
convert_hf_to_gguf.py
ggml/include/ggml.h
ggml/src/ggml-cpu/ggml-cpu.c
ggml/src/ggml-cuda/ggml-cuda.cu
ggml/src/ggml-cuda/gla.cu [new file with mode: 0644]
ggml/src/ggml-cuda/gla.cuh [new file with mode: 0644]
ggml/src/ggml-cuda/wkv6.cu
ggml/src/ggml-sycl/wkv6.cpp
ggml/src/ggml-vulkan/ggml-vulkan.cpp
ggml/src/ggml.c
gguf-py/gguf/constants.py
gguf-py/gguf/gguf_writer.py
gguf-py/gguf/tensor_mapping.py
src/llama-arch.cpp
src/llama-arch.h
src/llama-hparams.cpp
src/llama-hparams.h
src/llama-model.cpp
src/llama-model.h
src/llama-quant.cpp
src/llama.cpp
tests/test-backend-ops.cpp