git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
CUDA: Do not mutate cgraph for fused ADDs (#19566)
authorOliver Simons <redacted>
Fri, 13 Feb 2026 09:37:55 +0000 (10:37 +0100)
committerGitHub <redacted>
Fri, 13 Feb 2026 09:37:55 +0000 (15:07 +0530)
commit43919b7f4f0a4fef1de13c830fbe9a33ce38e483
tree5f299fd9254cc9f5080d38e2a92cd48f5e6b2000
parent423cf0b26fc0b72ff4bb4656a68a607d38b95fe5
CUDA: Do not mutate cgraph for fused ADDs (#19566)

* Do not mutate cgraph for fused ADDs

1. We should minimize in-place changes to the incoming ggml_cgraph
   where possible (such rewrites belong in graph_optimize)
2. Mutating the graph in place triggers an additional, unnecessary
   graph capture step, because the CUDA backend records the graph
   properties before the in-place modification, so the stored and
   actual graphs no longer match

* Assert ggml_tensor is trivially copyable
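  The idea behind the two points above can be sketched as follows. This is a
  simplified illustration, not the actual patch: the `ggml_tensor` struct here
  is a hypothetical stand-in (the real definition lives in ggml.h), and
  `fuse_adds` is an invented helper name. It shows why trivial copyability
  matters: a fused node can be built as a local value copy, leaving the node
  stored inside the incoming cgraph untouched.

```cpp
#include <type_traits>

// Hypothetical, simplified stand-in for ggml_tensor
// (the real definition is in ggml.h and has many more fields).
struct ggml_tensor {
    int op;               // operation id, e.g. an ADD
    ggml_tensor *src[4];  // input tensors
};

// The commit asserts this so that nodes may be cloned by plain value copy.
static_assert(std::is_trivially_copyable_v<ggml_tensor>,
              "ggml_tensor must be trivially copyable");

// Sketch: build the fused node as a local copy instead of editing the node
// stored in the incoming cgraph. The graph's recorded properties then still
// match its actual contents, so no extra graph capture is triggered.
ggml_tensor fuse_adds(const ggml_tensor &node, ggml_tensor *extra_src) {
    ggml_tensor fused = node;  // value copy; the cgraph is left untouched
    fused.src[2] = extra_src;  // attach the extra addend to the copy only
    return fused;
}
```

  With this pattern the backend launches the fused kernel from the local copy
  while the original graph node keeps its pre-fusion sources.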

* Update ggml/src/ggml-cuda/ggml-cuda.cu

Co-authored-by: Aman Gupta <redacted>
---------

Co-authored-by: Aman Gupta <redacted>
ggml/src/ggml-cuda/ggml-cuda.cu