Use model->gguf_kv for loading the template instead of using the C API. (#10868)
author DAN™ <redacted>
Tue, 17 Dec 2024 22:24:22 +0000 (17:24 -0500)
committer GitHub <redacted>
Tue, 17 Dec 2024 22:24:22 +0000 (23:24 +0100)
commit d62b532c52e0118323277eaa5f442e11ce6505ed
tree 549b262b18ee7188146aa6a6b99c52e3db19133f
parent 081b29bd2a3d91e7772e3910ce223dd63b8d7d26
Use model->gguf_kv for loading the template instead of using the C API. (#10868)

* Bump model_template to 16384 bytes to support larger chat templates.

* Use `model->gguf_kv` for efficiency.
src/llama.cpp
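
For context, here is a minimal sketch of what the change amounts to, not the actual `src/llama.cpp` code. It assumes `gguf_kv` is a string-to-string map of GGUF metadata and that the chat template is stored under the conventional `tokenizer.chat_template` key; only `model->gguf_kv`, the C API route (presumably `llama_model_meta_val_str`), and the 16384-byte buffer size come from the commit message itself.

```cpp
// Hedged sketch contrasting the two approaches named in the commit message.
// The struct and key name are illustrative, not the real llama.cpp layout.
#include <cstdio>
#include <map>
#include <string>

struct model_sketch {
    // GGUF key/value metadata, as read at model load time.
    std::map<std::string, std::string> gguf_kv;
};

// Before: copy the value through a fixed-size buffer, as the C API
// (llama_model_meta_val_str) would. The buffer must hold the entire
// template, hence bumping it to 16384 bytes for larger chat templates.
static std::string template_via_buffer(const model_sketch & model) {
    char model_template[16384] = {0};
    auto it = model.gguf_kv.find("tokenizer.chat_template");
    if (it != model.gguf_kv.end()) {
        std::snprintf(model_template, sizeof(model_template), "%s", it->second.c_str());
    }
    return std::string(model_template);
}

// After: read the value straight out of gguf_kv. No size limit on the
// template and no intermediate copy into a stack buffer.
static std::string template_via_gguf_kv(const model_sketch & model) {
    auto it = model.gguf_kv.find("tokenizer.chat_template");
    return it != model.gguf_kv.end() ? it->second : std::string();
}

int main() {
    model_sketch model;
    model.gguf_kv["tokenizer.chat_template"] =
        "{{ bos_token }}{% for message in messages %}...{% endfor %}";
    std::printf("via buffer : %s\n", template_via_buffer(model).c_str());
    std::printf("via gguf_kv: %s\n", template_via_gguf_kv(model).c_str());
    return 0;
}
```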