author    Gaurav Garg <redacted>
date      Wed, 19 Mar 2025 19:52:06 +0000 (01:22 +0530)
committer Georgi Gerganov <redacted>
date      Thu, 27 Mar 2025 07:35:24 +0000 (09:35 +0200)
commit    807726590102f2df4e93ef8f5e100d28f757994b
tree      46660a8852910d3da86eca00a72b3df51a2b8274
parent    57d281541618f56bec47c7b4e3070e474cc21846
CUDA: Improve flash decoding kernel GPU occupancy for BS=1 case (llama/12183)

- Determine the number of active blocks per SM using the cudaOccupancyMaxActiveBlocksPerMultiprocessor API, and use that value to pick the optimal parallel_blocks value (a hedged sketch follows this list).
- Prefer the vector flash attention kernels over the MMA kernel for the BS=1 case.
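As a rough illustration of the first bullet, the sketch below shows how the occupancy API can drive the choice of parallel_blocks. This is not the ggml code: the kernel stub, block size, shared-memory size, and n_seq_blocks are all assumed placeholders; only cudaOccupancyMaxActiveBlocksPerMultiprocessor and the device-property queries are the real CUDA runtime API.

```cuda
// Minimal sketch, not the ggml implementation. Kernel body, block size,
// shared-memory size, and n_seq_blocks are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void flash_attn_vec_stub(float * dst) {
    // Placeholder for a vector flash-attention kernel body.
    if (threadIdx.x == 0 && blockIdx.x == 0) dst[0] = 0.0f;
}

int main() {
    int device = 0;
    cudaGetDevice(&device);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);

    const int    block_size = 128; // threads per block (assumed)
    const size_t smem_bytes = 0;   // dynamic shared memory per block (assumed)

    // Ask the runtime how many blocks of this kernel fit on one SM.
    int blocks_per_sm = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &blocks_per_sm, flash_attn_vec_stub, block_size, smem_bytes);

    // With BS=1 the grid is otherwise tiny, so split the KV dimension across
    // enough blocks to keep every SM busy; this is the idea behind sizing
    // parallel_blocks from occupancy instead of using a fixed constant.
    const int n_seq_blocks    = 1; // blocks launched without KV splitting (assumed)
    const int max_wave        = blocks_per_sm * prop.multiProcessorCount;
    const int parallel_blocks = (max_wave + n_seq_blocks - 1) / n_seq_blocks;

    printf("blocks/SM = %d, SMs = %d -> parallel_blocks = %d\n",
           blocks_per_sm, prop.multiProcessorCount, parallel_blocks);
    return 0;
}
```

The second bullet is a heuristic rather than an API change: for single-token decode (BS=1) the vector kernels reach higher occupancy than the MMA kernel, so the kernel selection logic (presumably in fattn.cu, which this commit touches) favors them in that case.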

Fixes: #12182
---------

Co-authored-by: Johannes Gäßler <redacted>
12 files changed:
src/ggml-cuda/fattn-common.cuh
src/ggml-cuda/fattn-mma-f16.cuh
src/ggml-cuda/fattn-tile-f16.cu
src/ggml-cuda/fattn-tile-f32.cu
src/ggml-cuda/fattn-vec-f16.cuh
src/ggml-cuda/fattn-vec-f32.cuh
src/ggml-cuda/fattn-wmma-f16.cu
src/ggml-cuda/fattn.cu
src/ggml-cuda/ggml-cuda.cu
src/ggml-cuda/vendors/hip.h
src/ggml-cuda/vendors/musa.h
tests/test-backend-ops.cpp