Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (llama/19461)
authorMario Limonciello <redacted>
Thu, 12 Feb 2026 08:38:35 +0000 (02:38 -0600)
committerGeorgi Gerganov <redacted>
Sun, 15 Feb 2026 19:44:37 +0000 (21:44 +0200)
commit39b5f414a3460b3048a92e60805ed185d06a951b
tree154308d60ba8f16a1718d51c7c638ecb899d70a2
parent304205679c650c3e5977e0b01b7b9bd022336767

There is an upstream problem [1] with AMD's LLVM 22 fork and
rocWMMA 2.2.0 that causes compilation errors on devices without
native fp16 support (CDNA devices).

The specialized types aren't resolved properly:
```
/opt/rocm/include/rocwmma/internal/mfma_impl.hpp:2549:37: error: ambiguous partial specializations of 'amdgcn_mfma<__half, __half, __half, 16, 16, 16>'
 2549 |             using ARegsT = typename Impl::ARegsT;
```

Add a workaround that explicitly declares the types and casts when
compiling with HIP and ROCWMMA_FATTN [2].  Once this is fixed
upstream, version guards can be added so the workaround applies only
to the affected rocWMMA releases.

Link: https://github.com/ROCm/rocm-libraries/issues/4398
Link: https://github.com/ggml-org/llama.cpp/issues/19269
Signed-off-by: Mario Limonciello <redacted>
ggml/src/ggml-cuda/fattn-wmma-f16.cu