Improve CUDA graph capture (#19754)
author    Gaurav Garg <redacted>
Sat, 21 Feb 2026 09:39:36 +0000 (15:09 +0530)
committer GitHub <redacted>
Sat, 21 Feb 2026 09:39:36 +0000 (15:09 +0530)
commit a0c91e8f9f69c11bbdb1111af20537e206f0866f
tree   19620687bd4af5f45569b80489094ff1a04393ee
parent 07968d53e4c4421e227ef816d9732cdd5abfc78d
Improve CUDA graph capture (#19754)

* Improve CUDA graph capture

Currently, CUDA graphs are eagerly enabled on the first call to ggml_backend_cuda_graph_compute. If the graph properties keep changing (4+ consecutive updates), the graph is permanently disabled. This is suboptimal because:

- The first call always incurs CUDA graph capture overhead even if the graph is unstable
- Once permanently disabled, CUDA graphs never re-enable even after the graph stabilizes (e.g., switching from prompt processing to decode)

The new approach delays CUDA graph activation until warmup completes: the same cgraph must be called at least twice with matching properties before CUDA graph capture begins. This avoids wasted capture overhead on volatile graphs and allows graphs to become eligible once they stabilize.
This also fixes issues such as https://github.com/ggml-org/llama.cpp/discussions/19708

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Johannes Gäßler <redacted>
* Remove EM dashes

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Johannes Gäßler <redacted>
Co-authored-by: Aman Gupta <redacted>
ggml/src/ggml-cuda/common.cuh
ggml/src/ggml-cuda/ggml-cuda.cu