vulkan: For coopmat2 FA, use fp16 accumulators for the final result (llama/19376)
author    Jeff Bolz <redacted>
          Fri, 6 Feb 2026 08:15:13 +0000 (02:15 -0600)
committer Georgi Gerganov <redacted>
          Sat, 7 Feb 2026 08:37:38 +0000 (10:37 +0200)
commit    7f996d670510d7bc1d2c6343e857919a725559af
tree      51b6bd674dae30fa36f65be3eb90e14867c9ec3e
parent    aba07b5828885fdf1d2723de8aeff671ca5b2efe

The cpu and cuda backends use fp16 for the VKQ accumulator type; this change
does the same for vulkan. This helps particularly with large head sizes, which
are very register-limited.

I tried this for the coopmat1 path and it slowed down a bit. I didn't try the
scalar path.

I applied the softmax bias that the cuda backend uses to avoid overflow,
although I was not able to reproduce the original bug without it.
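For illustration, the interaction between an fp16 accumulator and the softmax bias can be sketched with a plain online-softmax accumulation loop. This is a minimal NumPy sketch, not the actual shader code; the function name, the `bias` parameter, and the `acc_dtype` knob are all illustrative assumptions. The key point it demonstrates is that subtracting a constant bias from the logits shrinks the un-normalized exp() weights (helping a narrow accumulator type avoid overflow) while cancelling exactly in the final normalization:

```python
import numpy as np

def online_softmax_weighted_sum(scores, V, bias=0.0, acc_dtype=np.float32):
    """Streaming (online) softmax-weighted sum, flash-attention style.

    scores : (N,) attention logits for one query row
    V      : (N, D) value rows
    bias   : constant subtracted from each logit before exp(); scales all
             un-normalized weights by exp(-bias), which cancels in the
             final division (illustrative stand-in for the backend's trick)
    acc_dtype : dtype of the running accumulators (e.g. np.float16 to
             model an fp16 VKQ accumulator)
    """
    m = -np.inf                                   # running max of logits
    l = acc_dtype(0.0)                            # running sum of exp()
    acc = np.zeros(V.shape[1], dtype=acc_dtype)   # running weighted V sum
    for s, v in zip(scores, V):
        m_new = max(m, s)
        # rescale previous partial sums when the running max increases
        scale = np.exp(m - m_new) if m != -np.inf else 0.0
        p = np.exp(s - m_new - bias)              # biased, un-normalized weight
        l = acc_dtype(l * scale + p)
        acc = acc * acc_dtype(scale) + acc_dtype(p) * v.astype(acc_dtype)
        m = m_new
    return acc / l                                # exp(-bias) cancels here
```

With `acc_dtype=np.float32` the result matches a reference softmax regardless of the bias value, since the bias factors out of numerator and denominator alike; with `np.float16`, a positive bias keeps the intermediate `l` and `acc` values further from the fp16 maximum of 65504.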
src/ggml-vulkan/vulkan-shaders/flash_attn_base.glsl
src/ggml-vulkan/vulkan-shaders/flash_attn_cm2.comp