llamafile : ppc64le MMA implementation for Q4_0. (llama/12489)
author amritahs-ibm <redacted>
Thu, 27 Mar 2025 06:51:47 +0000 (12:21 +0530)
committer Georgi Gerganov <redacted>
Thu, 27 Mar 2025 09:06:03 +0000 (11:06 +0200)
commit fc6d343e76f6408ec462c9c650a6e85aa7189b6a
tree 0612233941e0b265177a897c849a9e1fc211af17
parent 3199356d3a0ef162385f11ec29f6b18ebcbb41ab
llamafile : ppc64le MMA implementation for Q4_0. (llama/12489)

This change upstreams llamafile's CPU matrix
multiplication kernels for the ppc64le ISA using
MMA builtins. The patch handles matrix
multiplication between the quantised data types
block_q4_0 and block_q8_0.
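
For context, here is a minimal scalar sketch of the q4_0 x q8_0
dot product that these kernels accelerate. This is not the MMA
code itself; the block layouts mirror ggml's block_q4_0 and
block_q8_0, but the fp16 scales are replaced with plain floats
so the example stands alone.

```cpp
// Scalar reference for a dot product over quantized blocks.
// Illustrative sketch only: ggml stores the scales as fp16,
// here they are plain float to keep the example self-contained.
#include <cstdint>
#include <cstddef>

constexpr int QK = 32;  // elements per quantized block

struct block_q4_0 {
    float   d;          // scale
    uint8_t qs[QK / 2]; // 32 x 4-bit quants, two per byte
};

struct block_q8_0 {
    float  d;           // scale
    int8_t qs[QK];      // 32 x 8-bit quants
};

// dot(x, y) over nb blocks: sum_b d4*d8 * sum_j (q4_j - 8) * q8_j
float dot_q4_0_q8_0(const block_q4_0 *x, const block_q8_0 *y, size_t nb) {
    float sum = 0.0f;
    for (size_t b = 0; b < nb; ++b) {
        int32_t sumi = 0;
        for (int j = 0; j < QK / 2; ++j) {
            // low nibble -> element j, high nibble -> element j + QK/2
            const int v0 = (x[b].qs[j] & 0x0F) - 8;
            const int v1 = (x[b].qs[j] >>   4) - 8;
            sumi += v0 * y[b].qs[j] + v1 * y[b].qs[j + QK / 2];
        }
        sum += x[b].d * y[b].d * (float) sumi;
    }
    return sum;
}
```

The MMA path in sgemm.cpp presumably tiles many such dot
products so the POWER10 outer-product builtins can accumulate
several rows and columns per instruction, rather than looping
element by element as above.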

This change results in a 5% to 50% improvement
in total speed (i.e. all tokens / total time)
across various batch sizes.

The patch was tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
ggml/src/ggml-cpu/llamafile/sgemm.cpp