llamafile : ppc64le MMA implementation for Q4_0. (#12489)
author    amritahs-ibm <redacted>
          Thu, 27 Mar 2025 06:51:47 +0000 (12:21 +0530)
committer GitHub <redacted>
          Thu, 27 Mar 2025 06:51:47 +0000 (08:51 +0200)
commit    c7b43ab60855f752ae79937fb93d561bc30b69a4
tree      1b0e5abcd94e94fd91e150fc6156ebb7d5d5864c
parent    24feaec05792b972d5ff3e2b12d9237ebd50d1ac
This change upstreams llamafile's CPU matrix
multiplication kernels for the ppc64le ISA using
MMA builtins. The patch handles matrix
multiplication between the quantised datatypes
block_q4_0 and block_q8_0.

This change results in a 5% - 50% improvement
in total speed (i.e. all tokens / total time)
across various batch sizes.

The patch was tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
ggml/src/ggml-cpu/llamafile/sgemm.cpp