CUDA: add a fused top-K MoE kernel (llama/16130)
author     Aman Gupta <redacted>
           Thu, 25 Sep 2025 14:35:05 +0000 (22:35 +0800)
committer  Georgi Gerganov <redacted>
           Mon, 29 Sep 2025 09:41:09 +0000 (12:41 +0300)
commit     3648cd3e793628beb3f03ebecfa3353f88dfafcf
tree       8f6d9902113ba0d7d7c103aae4a21471d10cd164
parent     78eea5534d0fa207671d4aeeb1dd99ea2fc4cf0a
CUDA: add a fused top-K MoE kernel (llama/16130)

* CUDA: add a fused top-K MoE kernel

This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. argmax reduce over the top-k (n_experts_used) logits
3. write weights + ids to global memory

It is intended as a fusion of the softmax -> top-k -> get_rows pipeline used by MoE models.
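
To make the fused pipeline concrete, below is a minimal CUDA sketch of the idea, not the actual ggml kernel: one warp per token, softmax over the expert logits held in registers, then k rounds of warp argmax-reduction that write the selected weight and expert id. The kernel name `topk_moe_sketch`, the one-warp-per-token layout, the row-major `[n_experts, n_tokens]` logits layout, and the `n_experts <= 32` / `k <= n_experts` restrictions are assumptions for illustration; the real kernel in src/ggml-cuda/topk-moe.cu additionally covers the optional weight normalization, bounds checks, and shared-memory writeback mentioned in the review items below.

```cuda
// Minimal sketch (not the ggml implementation) of the fused top-K MoE routing idea.
// Assumptions: n_experts <= 32, k <= n_experts, one 32-thread warp per token.
#include <cuda_runtime.h>
#include <cfloat>
#include <math.h>

__global__ void topk_moe_sketch(const float * logits,   // [n_experts, n_tokens]
                                float       * weights,  // [k, n_tokens]
                                int         * ids,      // [k, n_tokens]
                                const int n_experts, const int k) {
    const int token = blockIdx.x;
    const int lane  = threadIdx.x;  // one lane per expert slot, 0..31

    // 1. softmax over this token's logits, computed entirely in registers
    float x = lane < n_experts ? logits[token*n_experts + lane] : -FLT_MAX;
    float m = x;
    for (int off = 16; off > 0; off >>= 1)  // warp max for numerical stability
        m = fmaxf(m, __shfl_xor_sync(0xffffffff, m, off));
    float e = lane < n_experts ? expf(x - m) : 0.0f;
    float s = e;
    for (int off = 16; off > 0; off >>= 1)  // warp sum
        s += __shfl_xor_sync(0xffffffff, s, off);
    float p = lane < n_experts ? e/s : -FLT_MAX;

    // 2. argmax-reduce k times, breaking ties towards the lower expert id
    for (int i = 0; i < k; ++i) {
        float best_p  = p;
        int   best_id = lane;
        for (int off = 16; off > 0; off >>= 1) {
            const float op  = __shfl_xor_sync(0xffffffff, best_p,  off);
            const int   oid = __shfl_xor_sync(0xffffffff, best_id, off);
            if (op > best_p || (op == best_p && oid < best_id)) {
                best_p  = op;
                best_id = oid;
            }
        }
        // 3. write the selected weight + id for slot i of this token
        if (lane == 0) {
            weights[token*k + i] = best_p;
            ids    [token*k + i] = best_id;
        }
        if (lane == best_id) {
            p = -FLT_MAX;  // exclude the winner from the next round
        }
    }
}

// hypothetical launch: one 32-thread block per token
// topk_moe_sketch<<<n_tokens, 32>>>(d_logits, d_weights, d_ids, 8, 2);
```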

* Refactor into ggml_cuda_should_use_topk_moe

* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers before

* Review: format + micro-optimizations

* Fix bug: correct the tie-breaking logic

* Add optional norm + clean-up code

* Use smem for final write

* Add bounds check

* Use better memory pattern for writeback
src/ggml-cuda/ggml-cuda.cu
src/ggml-cuda/topk-moe.cu [new file with mode: 0644]
src/ggml-cuda/topk-moe.cuh [new file with mode: 0644]
tests/test-backend-ops.cpp