ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (#15922)
author    Daniel Bevenius <redacted>
          Thu, 11 Sep 2025 13:39:12 +0000 (15:39 +0200)
committer GitHub <redacted>
          Thu, 11 Sep 2025 13:39:12 +0000 (14:39 +0100)
commit 24a6734daf6932ff29ba8c1ff0245c51d76f783e
tree   4ded99732ce646f1bf28c2cb8f414af9e5cf0909
parent 2b3efea9a4d91216850856fbb77075db26f6a6eb
ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (#15922)

This commit adds a check for GGML_MACHINE_SUPPORTS_i8mm when enabling
MATMUL_INT8 features, ensuring that i8mm intrinsics are only used when
the target hardware actually supports them.

The motivation for this is to fix ggml CI build failures: feature
detection correctly identifies that i8mm is not supported and adds the
+noi8mm flag, but the MATMUL_INT8 preprocessor definitions remain
enabled, so the compiler still attempts to use the vmmlaq_s32
intrinsic without i8mm support.
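The shape of such a guard can be sketched in CMake. The variable name
GGML_MACHINE_SUPPORTS_i8mm comes from the commit message; the runtime
probe and the flag handling below are assumptions for illustration,
not the upstream code:

```cmake
# Sketch only: probe whether the build machine can execute an i8mm
# instruction before enabling MATMUL_INT8 (probe body is hypothetical).
include(CheckCXXSourceRuns)
set(CMAKE_REQUIRED_FLAGS "-march=armv8.2-a+i8mm")
check_cxx_source_runs("
    #include <arm_neon.h>
    int main() {
        int8x16_t a = vdupq_n_s8(1);
        int32x4_t c = vdupq_n_s32(0);
        c = vmmlaq_s32(c, a, a);   // i8mm intrinsic
        return 0;
    }" GGML_MACHINE_SUPPORTS_i8mm)

if (GGML_MACHINE_SUPPORTS_i8mm)
    # Only enable the i8mm target feature when the probe succeeds.
    list(APPEND ARCH_FLAGS -march=armv8.2-a+i8mm)
else()
    # Explicitly disable i8mm so MATMUL_INT8 code paths stay off.
    list(APPEND ARCH_FLAGS -march=armv8.2-a+noi8mm)
endif()
```

Gating both the -march flag and the feature enablement on the same
probe result keeps the preprocessor view of the target consistent with
what the hardware actually supports.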

Refs: https://github.com/ggml-org/ggml/actions/runs/17525174120/job/49909199499
ggml/src/ggml-cpu/CMakeLists.txt