git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
convert : force patch_embd weights to F16 or F32 to avoid broken GGUFs (#15367)
author Sigbjørn Skjæret <redacted>
Sun, 17 Aug 2025 12:47:42 +0000 (14:47 +0200)
committer GitHub <redacted>
Sun, 17 Aug 2025 12:47:42 +0000 (14:47 +0200)
commit 4d196981d4db79e0105b939eaa7ecd40385b721c
tree f34e49094969899e6a2afac0e5c5be61e043758a
parent b143fbc87af0324aa49e16cd91faf3ba8bb22231

* force patch_embd weights to f32

* use MmprojModel base tensor_force_quant instead
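The second bullet refers to a per-tensor quantization override hook: during conversion, a `tensor_force_quant` method can pin specific tensors (here, the vision patch-embedding weights) to full or half precision instead of the output type chosen for the rest of the model. The following is a minimal standalone sketch of that idea, with a stand-in `QuantType` enum and a simplified signature; the real hook lives in `convert_hf_to_gguf.py` and works with `gguf` quantization types:

```python
from enum import Enum


class QuantType(Enum):
    """Stand-in for the converter's quantization type enum (assumption)."""
    F32 = 0
    F16 = 1
    DEFAULT = 2  # whatever low-bit type the converter would otherwise use


def tensor_force_quant(name: str, default: QuantType) -> QuantType:
    """Hypothetical sketch of a tensor_force_quant-style hook.

    Patch-embedding weights are forced to F32 so that quantizing them
    does not produce a broken mmproj GGUF; all other tensors keep the
    converter's default choice.
    """
    if "patch_embd" in name:
        return QuantType.F32
    return default


# Usage: the patch-embedding tensor is pinned, others pass through.
print(tensor_force_quant("v.patch_embd.weight", QuantType.DEFAULT).name)  # F32
print(tensor_force_quant("v.blk.0.attn_q.weight", QuantType.DEFAULT).name)  # DEFAULT
```

Putting the override in the `MmprojModel` base class (rather than each vision model subclass) means every multimodal projector conversion inherits the fix.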
convert_hf_to_gguf.py