fix: apply clang-format to CUDA macros (#16017)
author     Bowen Han <redacted>
           Tue, 16 Sep 2025 06:59:19 +0000 (23:59 -0700)
committer  GitHub <redacted>
           Tue, 16 Sep 2025 06:59:19 +0000 (08:59 +0200)
commit     f1fbffb5c0b34b2a68febb7da3fd0f8333f1ed4c
tree       c843223dbad826f46bcfa6be09cd3cbe7169a36e
parent     51abc96bdc52ba8cd6ad78dcf12ed9a041d7b442

clang-format previously split long CUDA macros (e.g. __launch_bounds__) across
hard-to-read line breaks inside template declarations, such as:

  template<int D, int ncols, int nwarps, int VKQ_stride,
           typename KQ_acc_t, bool use_logit_softcap>
      __launch_bounds__(nwarps*ggml_cuda_get_physical_warp_size(), 1)

This change adjusts formatting rules so that CUDA macros remain consistent
and aligned with the surrounding template syntax.
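For illustration only, below is a minimal sketch of the kind of .clang-format rule
such a fix typically relies on. It assumes __launch_bounds__ is registered through
clang-format's AttributeMacros option; the actual diff in this commit may use
different options or values.

  # Hypothetical .clang-format excerpt (not necessarily the exact change here):
  # Treat the CUDA launch-bounds qualifier as an attribute-like macro so that
  # clang-format keeps it attached to the function signature instead of
  # wrapping it as a separate statement under the template declaration.
  AttributeMacros:
    - __launch_bounds__

With __launch_bounds__ recognized as an attribute, clang-format no longer indents
it on its own line inside the template declaration, which matches the behavior
described above.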
.clang-format