vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)
author    Jeff Bolz <redacted>
          Wed, 2 Apr 2025 19:25:08 +0000 (14:25 -0500)
committer Georgi Gerganov <redacted>
          Tue, 8 Apr 2025 08:47:46 +0000 (11:47 +0300)
commit    7b4302979a718ea2949af03183572fa02e56b591
tree      871568934f39bb1565a7b7802fb5ee49daae5324
parent    a6e3af10d9c6460958aa86a884c20769953bbc23
vulkan: Implement split_k for coopmat2 flash attention. (llama/12627)

When using group query attention, there is one workgroup per KV batch, which can
mean very few workgroups (e.g. just 8 in some models). Enable split_k to spread
the work across SMs. This helps a lot when the KV cache is large.
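
For context, below is a minimal CPU-side sketch of how a split_k reduction of
flash-attention partials can be combined, assuming each split reports an
unnormalized partial output row together with its running score max and sum of
exponentials. The names (SplitResult, reduce_splits) are illustrative only and
are not taken from flash_attn_split_k_reduce.comp or the ggml sources.

    // Sketch: merge per-split flash-attention partials with a numerically
    // stable log-sum-exp rescaling, then apply the final softmax normalization.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct SplitResult {
        std::vector<float> O; // partial output row of length D, scaled by exp(score - m)
        float m;              // running max of attention scores within this split
        float l;              // sum of exp(score - m) over this split
    };

    std::vector<float> reduce_splits(const std::vector<SplitResult>& splits, size_t D) {
        // Global max across splits keeps the rescaled exponentials bounded.
        float m = -INFINITY;
        for (const auto& s : splits) m = std::max(m, s.m);

        std::vector<float> O(D, 0.0f);
        float L = 0.0f;
        for (const auto& s : splits) {
            const float scale = std::exp(s.m - m); // rescale this split to the global max
            L += s.l * scale;
            for (size_t d = 0; d < D; ++d) O[d] += s.O[d] * scale;
        }
        for (size_t d = 0; d < D; ++d) O[d] /= L;  // final normalization
        return O;
    }

In the Vulkan implementation this reduction runs as a separate shader pass after
the split attention workgroups have written their partial results.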
src/ggml-vulkan/ggml-vulkan.cpp
src/ggml-vulkan/vulkan-shaders/flash_attn_cm2.comp
src/ggml-vulkan/vulkan-shaders/flash_attn_split_k_reduce.comp [new file with mode: 0644]
src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp
tests/test-backend-ops.cpp