author     Carolinabanana <redacted>
           Tue, 9 Apr 2024 08:16:13 +0000 (09:16 +0100)
committer  Georgi Gerganov <redacted>
           Tue, 9 Apr 2024 17:26:18 +0000 (20:26 +0300)
commit     526332873b0a782c9117b15566d9a5ab625e4842
tree       4009a16d48ef983b00d413f4d4ea0036f8249033
parent     1d2721ca729ec056291834035af63bf4d6cf83ec
llama : add Command R Plus support (llama/6491)

* Add Command R Plus GGUF

* Loading works up to LayerNorm2D

* Export new tensors in 1D so they are not quantized.
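
  A minimal sketch of the idea, assuming a hypothetical helper name (the
  real logic is spread across the export script and ggml's quantization
  path): block quantization is applied to weight matrices, so a tensor
  written with a single dimension falls through and stays in full precision.

      #include <stdbool.h>

      /* Hypothetical predicate, for illustration only: 1-D tensors such as
         norms and biases are exported as-is and never reach the quantizer. */
      static bool tensor_is_quantizable(int n_dims) {
          return n_dims >= 2;  /* only matrices go through block quantization */
      }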

* Fix embedding layer based on Noeda's example

* Whitespace

* Add line

* Fix unexpected tokens on MPS. Re-add F16 fix. (Noeda)

* dranger003: Fix block index overflow in CUDA dequantizing.
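
  The likely failure mode is 32-bit index arithmetic: at this model scale a
  tensor can hold more than 2^31 elements, so an int element index derived
  from the block index wraps. A plain-C sketch of the wrap and the 64-bit
  widening, with hypothetical values rather than the actual kernel code:

      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          const int32_t block_size = 256;
          const int32_t block_idx  = 9000000;  /* plausible for a huge tensor */
          /* 32-bit arithmetic loses the high bits past 2^31: the unsigned
             multiply is well-defined and the narrowing cast wraps on typical
             targets (with plain int, as in a kernel, it is UB) */
          int32_t wrapped = (int32_t) ((uint32_t) block_idx * (uint32_t) block_size);
          /* the fix: promote to 64 bits before multiplying */
          int64_t widened = (int64_t) block_idx * block_size;
          printf("wrapped: %d, widened: %lld\n", wrapped, (long long) widened);
          return 0;
      }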

* Reverted the blocked multiplication code as it still has issues and could affect other Llama arches.

* Export norms as f32

* Fix overflow issues during quant and other cleanup
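
  Same overflow family on the CPU quantization side: row counts times row
  widths exceed INT_MAX for a model this large. A sketch with a hypothetical
  name, not the actual ggml signature: carrying the counts as int64_t end to
  end keeps the product from wrapping.

      #include <stdint.h>

      /* Hypothetical helper: with int64_t parameters the element-count
         product cannot wrap, even for multi-billion-element tensors. */
      static int64_t quantize_elem_count(int64_t nrow, int64_t n_per_row) {
          return nrow * n_per_row;  /* 64-bit multiply, no int overflow */
      }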

* Type convention

Co-authored-by: Georgi Gerganov <redacted>

* dranger003: Fix more int overflow during quant.

---------

Co-authored-by: S <redacted>
Co-authored-by: S <redacted>
Co-authored-by: slaren <redacted>
Co-authored-by: Georgi Gerganov <redacted>
12 files changed:
ggml-cuda.cu
ggml-cuda/common.cuh
ggml-cuda/convert.cu
ggml-cuda/convert.cuh
ggml-cuda/dequantize.cuh
ggml-cuda/dmmv.cu
ggml-cuda/quantize.cu
ggml-cuda/quantize.cuh
ggml-quants.c
ggml-quants.h
ggml.c
ggml.h