vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559)
author Jeff Bolz <redacted>
Wed, 2 Apr 2025 17:40:32 +0000 (12:40 -0500)
committer GitHub <redacted>
Wed, 2 Apr 2025 17:40:32 +0000 (19:40 +0200)
commit be0a0f8cae039e2286f757612accebfb8f21b36e
tree 12043a01ceaa5fb435e25d61e290aaa9cdbb6b91
parent 92e3006bb69dfeb656ccf5c7c1c1efadb03c88c2
vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559)

When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each; now we run
8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k, which will scale much
better with 4x fewer workgroups.
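
For illustration only, here is a minimal C++ sketch of the grouping arithmetic
described above, assuming the example shapes (32 Q heads sharing 8 K/V heads);
the names gqa_ratio and workgroups are hypothetical and not taken from the
shader or ggml-vulkan.cpp:

#include <cassert>
#include <cstdio>

int main() {
    // Shapes from the example above.
    const int q_heads  = 32;   // q(128,1,32,1)      -> 32 Q heads, 1 row each
    const int kv_heads = 8;    // k/v(128,16640,8,1) -> 8 K/V heads

    // Adjacent Q heads that read the same K/V head get batched together.
    assert(q_heads % kv_heads == 0);
    const int gqa_ratio = q_heads / kv_heads;    // 4 Q heads per K/V head

    // One workgroup per K/V head instead of one per Q head.
    const int workgroups = q_heads / gqa_ratio;  // 8 instead of 32

    printf("gqa_ratio=%d, workgroups=%d (each computes %d results)\n",
           gqa_ratio, workgroups, gqa_ratio);
    return 0;
}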
ggml/src/ggml-vulkan/ggml-vulkan.cpp
ggml/src/ggml-vulkan/vulkan-shaders/flash_attn_cm2.comp