fix: apply clang-format to CUDA macros (#16017)
author Bowen Han <redacted>
Tue, 16 Sep 2025 06:59:19 +0000 (23:59 -0700)
committer GitHub <redacted>
Tue, 16 Sep 2025 06:59:19 +0000 (08:59 +0200)
clang-format previously inserted unreadable line breaks into long CUDA macros
(e.g. __launch_bounds__) inside template declarations, such as:

  template<int D, int ncols, int nwarps, int VKQ_stride,
           typename KQ_acc_t, bool use_logit_softcap>
      __launch_bounds__(nwarps*ggml_cuda_get_physical_warp_size(), 1)

This change adjusts formatting rules so that CUDA macros remain consistent
and aligned with the surrounding template syntax.
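With __launch_bounds__ and the other CUDA keywords registered as attribute
macros, clang-format treats each one as a single unit and no longer breaks
inside it. A minimal sketch of the intended result (the kernel name and
parameters here are hypothetical, not taken from this repository):

```cuda
// Hypothetical kernel: the __launch_bounds__ attribute stays intact on
// its own line between the template header and the function signature,
// instead of being split mid-expression as before.
template <int D, int ncols, int nwarps, typename KQ_acc_t, bool use_logit_softcap>
__launch_bounds__(nwarps * ggml_cuda_get_physical_warp_size(), 1)
static __global__ void example_kernel(const float * src, float * dst) {
    // kernel body elided
}
```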

.clang-format

index 117e6986f6f8cabdfa54bb61ff00818665028044..742723fc8f9dfefdf569b768c4b198337e86bbd8 100644 (file)
@@ -22,6 +22,13 @@ AllowShortIfStatementsOnASingleLine: Never
 AllowShortLambdasOnASingleLine: Inline
 AllowShortLoopsOnASingleLine: false
 AlwaysBreakBeforeMultilineStrings: true
+# Treat CUDA keywords/attributes as "attribute macros" and avoid breaking lines inside them
+AttributeMacros:
+  - __host__
+  - __device__
+  - __global__
+  - __forceinline__
+  - __launch_bounds__
 BinPackArguments: true
 BinPackParameters: false # OnePerLine
 BitFieldColonSpacing: Both