server : support multi-modal context checkpoints (#19849)
author    Georgi Gerganov <redacted>
          Wed, 25 Feb 2026 13:14:27 +0000 (15:14 +0200)
committer GitHub <redacted>
          Wed, 25 Feb 2026 13:14:27 +0000 (15:14 +0200)
commit    d7d826b3c1fca0c1564a59d92bf6c2c40c8e69fb
tree      8524882da556b0f64871b97e51c224218b7b7736
parent    c747294b2d70a00a91713abe62fb7890c5893c5c

* Modify llama-memory-hybrid-iswa.cpp

* Modify llama-memory-recurrent.cpp

* Modify server-common.cpp

* Modify server-common.h

* Modify server-context.cpp

* Modify server-task.h

* Added comment to llama-memory-hybrid-iswa.cpp

* Remove comment from server-context.cpp

* Stylistic fix server-context.cpp

* Fix an issue when seq_rm isn't called in server-context.cpp

* cont : alternative impl

* cont : cleanup

* cont : n_tokens -> int64_t

---------

Co-authored-by: timkhronos <redacted>
src/llama-memory-recurrent.cpp
tools/server/server-common.cpp
tools/server/server-common.h
tools/server/server-context.cpp
tools/server/server-task.h