git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
llama: print memory breakdown on exit (#15860)
author Johannes Gäßler <redacted>
Wed, 24 Sep 2025 14:53:48 +0000 (16:53 +0200)
committer GitHub <redacted>
Wed, 24 Sep 2025 14:53:48 +0000 (16:53 +0200)
commit e789095502b337690c7616db32d7c679a5bd2533
tree 63ba97b660a56bb83ba3fede679d684ef0ad2d39
parent f2a789e33490deb483a2694b066b37e45524bb79
llama: print memory breakdown on exit (#15860)

* llama: print memory breakdown on exit
18 files changed:
common/sampling.cpp
ggml/include/ggml-backend.h
ggml/src/ggml-backend.cpp
include/llama.h
src/llama-context.cpp
src/llama-context.h
src/llama-kv-cache-iswa.cpp
src/llama-kv-cache-iswa.h
src/llama-kv-cache.cpp
src/llama-kv-cache.h
src/llama-memory-hybrid.cpp
src/llama-memory-hybrid.h
src/llama-memory-recurrent.cpp
src/llama-memory-recurrent.h
src/llama-memory.h
src/llama-model.cpp
src/llama-model.h
tools/perplexity/perplexity.cpp