Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461)
author Mario Limonciello <redacted>
    Thu, 12 Feb 2026 08:38:35 +0000 (02:38 -0600)
committer GitHub <redacted>
    Thu, 12 Feb 2026 08:38:35 +0000 (09:38 +0100)
commit 6845f7f87f4ece1ac13db2e7d0388090bc1c8d3c
tree d4eba0b57806cfd8cd95ba0f8f5873c80cb26357
parent fa16e517a3bcc15818d0813993693e1c92ce6c76

There is an upstream problem [1] with AMD's LLVM 22 fork and
rocWMMA 2.2.0 that causes compilation failures on devices without
native fp16 support (CDNA devices).

The specialized types aren't resolved properly:
```
/opt/rocm/include/rocwmma/internal/mfma_impl.hpp:2549:37: error: ambiguous partial specializations of 'amdgcn_mfma<__half, __half, __half, 16, 16, 16>'
 2549 |             using ARegsT = typename Impl::ARegsT;
```

Add a workaround that explicitly declares the types and casts when
compiling with HIP and ROCWMMA_FATTN [2]. Once this is actually
fixed upstream, version guards can be added to detect the fixed
release and apply the workaround only where necessary.
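The shape of the workaround can be sketched as follows. This is a
simplified illustration of the guard-plus-explicit-types pattern, not
the exact code in fattn-wmma-f16.cu; the typedef names and any macro
other than ROCWMMA_FATTN-related guards are assumptions:

```cpp
// Sketch only: under the HIP + rocWMMA build, declare the 16x16x16
// half-precision fragment types explicitly so the compiler never has to
// resolve the ambiguous amdgcn_mfma<__half, __half, __half, 16, 16, 16>
// partial specializations; call sites then cast to these concrete types.
#if defined(GGML_USE_HIP) && defined(GGML_HIP_ROCWMMA_FATTN)
#include <rocwmma/rocwmma.hpp>
namespace wmma = rocwmma;
// Hypothetical alias names, for illustration:
typedef wmma::fragment<wmma::matrix_a,    16, 16, 16, wmma::float16_t, wmma::row_major> frag_a_t;
typedef wmma::fragment<wmma::matrix_b,    16, 16, 16, wmma::float16_t, wmma::col_major> frag_b_t;
typedef wmma::fragment<wmma::accumulator, 16, 16, 16, wmma::float16_t>                  frag_c_t;
#else
#include <mma.h>
namespace wmma = nvcuda::wmma;
typedef wmma::fragment<wmma::matrix_a,    16, 16, 16, half, wmma::row_major> frag_a_t;
typedef wmma::fragment<wmma::matrix_b,    16, 16, 16, half, wmma::col_major> frag_b_t;
typedef wmma::fragment<wmma::accumulator, 16, 16, 16, half>                  frag_c_t;
#endif
```

Because the types are spelled out rather than deduced, the ambiguous
specialization lookup in mfma_impl.hpp is never triggered on the
affected compiler.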

Link: https://github.com/ROCm/rocm-libraries/issues/4398
Link: https://github.com/ggml-org/llama.cpp/issues/19269
Signed-off-by: Mario Limonciello <redacted>
ggml/src/ggml-cuda/fattn-wmma-f16.cu