hexagon: dma optimizations (mostly fixing regressions) (#21137)
author    Max Krasnyansky <redacted>
          Sun, 29 Mar 2026 13:40:13 +0000 (06:40 -0700)
committer GitHub <redacted>
          Sun, 29 Mar 2026 13:40:13 +0000 (06:40 -0700)
commit    f5d1c4179fedf726bec744d3125a55df8d02496a
tree      05e1ffbd027ffd4bbf3ed0d8a3f57c8aaf58f43c
parent    2405d59cb613f7b9f98ecbc9eb25f8a45188ee06

* hex-fa: add simple dma cache for Mask

I noticed that we were refetching the mask rows over and over.
This simple cache avoids that.
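
The idea above can be sketched as a one-entry cache keyed by row index: a DMA fetch is issued only when the requested row differs from the last one fetched. This is a minimal illustrative sketch, not the actual flash-attn-ops.c implementation; the names (`mask_cache`, `get_mask_row`, `dma_fetch`) and the plain `memcpy` standing in for a real Hexagon DMA transfer are assumptions.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical one-entry cache for mask rows. In the real kernel the
 * cached row would live in fast local memory (VTCM) and dma_fetch would
 * queue an actual DMA descriptor; here memcpy stands in for that. */

#define MASK_ROW_BYTES 256

typedef struct {
    int32_t row;                  /* index of the cached row, -1 = empty */
    uint8_t data[MASK_ROW_BYTES]; /* cached row contents */
} mask_cache;

static int g_dma_fetches = 0;     /* counts simulated DMA transfers */

/* Stand-in for the real DMA fetch: copy one mask row from main memory. */
static void dma_fetch(uint8_t *dst, const uint8_t *mask, int32_t row) {
    memcpy(dst, mask + (size_t) row * MASK_ROW_BYTES, MASK_ROW_BYTES);
    g_dma_fetches++;
}

/* Return a pointer to the requested mask row, fetching only on a miss.
 * Repeated requests for the same row reuse the cached copy. */
static const uint8_t *get_mask_row(mask_cache *c, const uint8_t *mask, int32_t row) {
    if (c->row != row) {          /* miss: refetch and remember the row */
        dma_fetch(c->data, mask, row);
        c->row = row;
    }
    return c->data;               /* hit: no DMA traffic */
}
```

With this in place, a flash-attention loop that walks the same mask row for many query blocks pays for one fetch instead of one per block.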

* hex-dma: unset in-order desc bit which caused significant perf regression

We don't rely on true in-order processing of the DMA descriptors anywhere.
It turns out this mode caused a significant regression of around 3-4 TPS during token generation.
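
The fix amounts to not setting an ordering flag when descriptors are built, so the DMA engine is free to complete transfers out of order. This is a hedged sketch only: the descriptor layout, field names, and the bit position of `DESC_ORDER_BIT` below are assumptions for illustration, not the actual Hexagon uDMA descriptor format used in hex-dma.h.

```c
#include <stdint.h>

/* Hypothetical 32-bit descriptor control word. The bit position of the
 * in-order flag is assumed; consult the Hexagon docs for the real layout. */
#define DESC_ORDER_BIT  (1u << 24)     /* assumed in-order completion flag */
#define DESC_LEN_MASK   0x00FFFFFFu    /* assumed transfer-length field */

typedef struct {
    uint32_t next;  /* link to the next descriptor in the chain */
    uint32_t ctrl;  /* control word: length plus flag bits */
    uint32_t src;   /* source address */
    uint32_t dst;   /* destination address */
} dma_desc;

/* Build a descriptor WITHOUT the in-order bit: transfers may complete in
 * any order, so the engine is not serialized waiting on earlier ones. */
static void desc_init_unordered(dma_desc *d, uint32_t src, uint32_t dst, uint32_t len) {
    d->next = 0;
    d->src  = src;
    d->dst  = dst;
    d->ctrl = len & DESC_LEN_MASK;     /* length only; order bit left clear */
}
```

Since nothing in the kernels waits on descriptor N before touching the result of descriptor N+1 (completion is tracked per transfer), dropping the ordering constraint recovers the lost token-generation throughput without changing correctness.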

* hex-rope: update comment to clarify that we don't need in-order DMA completions
ggml/src/ggml-hexagon/htp/flash-attn-ops.c
ggml/src/ggml-hexagon/htp/hex-dma.h
ggml/src/ggml-hexagon/htp/rope-ops.c