git.djapps.eu Git - pkg/ggml/sources/llama.cpp/commit
llama : enable chunked fused GDN path (#20340)
author     Georgi Gerganov <redacted>
           Wed, 11 Mar 2026 20:46:40 +0000 (22:46 +0200)
committer  GitHub <redacted>
           Wed, 11 Mar 2026 20:46:40 +0000 (22:46 +0200)
commit  d28961d81e73e32b295d0ad638f3ff14676aeeda
tree    4c10ffe5162b737eaae0c8d2ca75d3b131525e23
parent  f90bd1dd84b59d75ab7d442228b67ec9a797577c
llama : enable chunked fused GDN path (#20340)

* llama : enable chunked fused GDN path
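
  The "chunked" path processes the token sequence in fixed-size chunks while
  carrying the recurrent state across chunk boundaries, so within a chunk the
  kernel can batch work during prefill. A minimal host-side C++ sketch of that
  idea for a simple gated linear recurrence (illustrative only; the real GDN
  update is a matrix-valued delta rule, and all names here are made up):

  ```cpp
  #include <algorithm>
  #include <cstddef>
  #include <vector>

  // Token-by-token gated linear recurrence: s_t = g_t * s_{t-1} + x_t.
  // This stands in for the per-token state update of a linear-attention
  // style layer.
  static std::vector<float> scan_sequential(const std::vector<float> & g,
                                            const std::vector<float> & x) {
      std::vector<float> out(x.size());
      float s = 0.0f;
      for (size_t t = 0; t < x.size(); ++t) {
          s = g[t]*s + x[t];
          out[t] = s;
      }
      return out;
  }

  // Chunked variant: process the sequence in chunks of n_chunk tokens,
  // carrying the state s across chunk boundaries. The carried state makes
  // the result identical to the sequential scan, while each chunk becomes
  // a unit the fused kernel can process as a batch.
  static std::vector<float> scan_chunked(const std::vector<float> & g,
                                         const std::vector<float> & x,
                                         size_t n_chunk) {
      std::vector<float> out(x.size());
      float s = 0.0f; // state carried between chunks
      for (size_t c0 = 0; c0 < x.size(); c0 += n_chunk) {
          const size_t c1 = std::min(c0 + n_chunk, x.size());
          for (size_t t = c0; t < c1; ++t) {
              s = g[t]*s + x[t];
              out[t] = s;
          }
      }
      return out;
  }
  ```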

* models : avoid Q and K repeats when using fused GDA

* cont : fix comment

Co-authored-by: Aman Gupta <redacted>
* cont : fix the fix

Co-authored-by: Aman Gupta <redacted>
* cont : fix

* metal : add GDN kernel (#20361)

* metal : add Metal backend for GGML_OP_GATED_DELTA_NET

Add a fused Metal kernel for the gated delta net recurrence op
(#19504), enabling GPU-accelerated inference for DeltaNet-based
models (Qwen3.5, etc.) on Apple Silicon.

Supports both GDA (scalar gate) and KDA (per-row gate) modes
with head_size 64 and 128. Unsupported configurations (head_size
32, non-contiguous tensors) gracefully fall back to CPU.
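
For reference, the recurrence being fused: in the gated delta rule (as
described in the Gated DeltaNet literature), the state is a d_k x d_v matrix
decayed each step and updated with a delta-rule write. A plain C++ sketch of
one step in the scalar-gate (GDA) mode; this is not the Metal kernel, and the
names and layout are illustrative (KDA replaces the scalar g with a
per-row decay):

```cpp
#include <cstddef>
#include <vector>

// One step of the gated delta rule on a d_k x d_v state S (row-major):
//   S <- g * S                            (scalar decay, the GDA mode)
//   S <- S + beta * k * (v - S^T k)^T     (delta-rule write)
static void gdn_step(std::vector<float> & S, size_t d_k, size_t d_v,
                     const std::vector<float> & k,
                     const std::vector<float> & v,
                     float g, float beta) {
    // decay the previous state
    for (float & s : S) s *= g;
    // prediction from the decayed state: p = S^T k  (length d_v)
    std::vector<float> p(d_v, 0.0f);
    for (size_t i = 0; i < d_k; ++i)
        for (size_t j = 0; j < d_v; ++j)
            p[j] += S[i*d_v + j] * k[i];
    // delta-rule write: S += beta * k * (v - p)^T
    for (size_t i = 0; i < d_k; ++i)
        for (size_t j = 0; j < d_v; ++j)
            S[i*d_v + j] += beta * k[i] * (v[j] - p[j]);
}
```

With beta = 1 and a unit-norm key, the write is exact: after the step,
S^T k recalls v.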

Performance: Qwen3.5-0.8B Q4_K_M on M4 Max
  tg128: 170 -> 213 t/s (+25%)

Co-Authored-By: Claude Opus 4.6 <redacted>
* metal : validate contiguity of all input tensors in supports_op

Co-Authored-By: Claude Opus 4.6 <redacted>
* metal : add algorithm equivalence comment for GDA decay path

Co-Authored-By: Claude Opus 4.6 <redacted>
* cont : unslop + optimize

* cont : clean-up

---------

Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
* CUDA: AR gated delta net improvements (#20391)

* Add FastDiv to gated_delta_net_cuda
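
FastDiv replaces runtime integer division and modulo (slow instruction
sequences on GPUs) with a multiply-and-shift against a precomputed magic
constant. A host-side C++ sketch of one common scheme, Lemire-style
magic-number division (not necessarily the exact constants used in
ggml-cuda; `fastdiv32` is an illustrative name):

```cpp
#include <cstdint>

// Precomputed reciprocal for dividing 32-bit values by a fixed divisor d.
// m = floor((2^64 - 1) / d) + 1 yields an exact quotient floor(n / d) for
// every uint32_t n via a single 64x64 -> 128-bit multiply.
// Note: requires d >= 2 (d == 1 would overflow m and must be special-cased).
struct fastdiv32 {
    uint64_t m;
    uint32_t d;
    explicit fastdiv32(uint32_t d_) : m(UINT64_MAX / d_ + 1), d(d_) {}
    uint32_t div(uint32_t n) const {
        // high 64 bits of m * n == floor(n / d)
        return (uint32_t)(((unsigned __int128)m * n) >> 64);
    }
    uint32_t mod(uint32_t n) const {
        return n - div(n)*d;
    }
};
```

On a GPU the divisor (e.g. a tensor dimension) is fixed for the whole
kernel launch, so the constant is computed once on the host and every
thread reuses it.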

* Shard columns across warps

This reduces register pressure (avoids spill for S_v = 128) and gives
the warp-scheduler more CTAs to schedule (thus hiding data-access
latencies).
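
The sharding itself is plain index math: instead of one warp owning all S_v
state columns, each warp owns a contiguous slice, so every thread keeps
fewer values in registers and more CTAs fit per SM. A host-side sketch of
such a partitioning (illustrative only, not the kernel code):

```cpp
#include <cstdint>

struct col_range { uint32_t begin, end; };

// Contiguous [begin, end) slice of n_cols state columns owned by warp
// warp_id out of n_warps: an even split, with the remainder given to
// the first warps so no warp holds more than one extra column.
static col_range shard_cols(uint32_t n_cols, uint32_t n_warps, uint32_t warp_id) {
    const uint32_t base  = n_cols / n_warps;
    const uint32_t rem   = n_cols % n_warps;
    const uint32_t begin = warp_id*base + (warp_id < rem ? warp_id : rem);
    const uint32_t count = base + (warp_id < rem ? 1u : 0u);
    return { begin, begin + count };
}
```

For S_v = 128 split across 4 warps, each warp owns 32 columns, i.e. one
column per lane at warp size 32, instead of four per lane.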

* Remove unneeded include in gated_delta_net.cu

* Improve comments

* Apply code formatting

* Make sharding HIP-compatible

1. Use ggml_cuda_get_physical_warp_size() to determine warp size flexibly
2. Add test with partial warp to test sum reduction on CUDA

* Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t

* Rename variables

* Enable GDN also for prefill, move TODO for chunked_GDN

* Actually remove the TODO from 206890897546bd16602c3b79394fd5ea09ef199f

* Get warp size at runtime

warp_size is not known at compile time in HIP host code.

* Don't expose ggml_cuda_get_physical_warp_size on host

---------

Co-authored-by: uvos <redacted>
* llama : refactor llm_build_delta_net_base API

---------

Co-authored-by: Aman Gupta <redacted>
Co-authored-by: Paul Flynn <redacted>
Co-authored-by: Claude Opus 4.6 <redacted>
Co-authored-by: Oliver Simons <redacted>
Co-authored-by: uvos <redacted>
20 files changed:
ggml/include/ggml.h
ggml/src/ggml-cpu/ops.cpp
ggml/src/ggml-cuda/gated_delta_net.cu
ggml/src/ggml-metal/ggml-metal-device.cpp
ggml/src/ggml-metal/ggml-metal-device.h
ggml/src/ggml-metal/ggml-metal-device.m
ggml/src/ggml-metal/ggml-metal-impl.h
ggml/src/ggml-metal/ggml-metal-ops.cpp
ggml/src/ggml-metal/ggml-metal-ops.h
ggml/src/ggml-metal/ggml-metal.metal
src/llama-context.cpp
src/llama-cparams.h
src/llama-impl.h
src/models/delta-net-base.cpp
src/models/kimi-linear.cpp
src/models/models.h
src/models/qwen35.cpp
src/models/qwen35moe.cpp
src/models/qwen3next.cpp
tests/test-backend-ops.cpp