convert : fix autoawq gemma (#6704)
author    Zheng.Deng <redacted>
          Tue, 16 Apr 2024 20:51:07 +0000 (04:51 +0800)
committer GitHub <redacted>
          Tue, 16 Apr 2024 20:51:07 +0000 (23:51 +0300)
commit    facb8b56f8fd3bb10a693bf0943ae9d69d0828ef
tree      169ecebed53b9047b7f234e673846fb1a84ab229
parent    532c1737a14bb4b99747e6f460874947df37e450
convert : fix autoawq gemma (#6704)

* fix conversion error for AutoAWQ-quantized Gemma models

Using AutoAWQ to quantize a Gemma model produces an lm_head.weight tensor in model-00001-of-00002.safetensors. As a result, convert-hf-to-gguf.py cannot map lm_head.weight and the conversion fails. Skipping this tensor during loading prevents the error.

* use a full string match and print a short message

Change the check to a full string match on the tensor name and print a short message to inform users that lm_head.weight has been skipped.
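The guard described above can be sketched as follows. This is a minimal illustration of the technique (full string match on the tensor name, skip, and inform the user), not the exact code in convert-hf-to-gguf.py; the function name and message text are hypothetical.

```python
def should_skip_tensor(name: str) -> bool:
    """Return True if this tensor should not be loaded during conversion.

    A full string comparison (==) is used rather than a substring test,
    so only the extra lm_head.weight emitted by AutoAWQ is skipped and
    similarly named tensors elsewhere in the model are left untouched.
    """
    if name == "lm_head.weight":
        # Hypothetical message; the point is to tell the user the skip happened.
        print("Skipping tensor 'lm_head.weight' (unmapped for Gemma)")
        return True
    return False
```

With this in place, the tensor-loading loop simply continues past lm_head.weight instead of failing on an unmappable name.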

---------

Co-authored-by: Zheng.Deng <redacted>
convert-hf-to-gguf.py