llamafile : ppc64le MMA INT8 implementation (llama/10912)
author amritahs-ibm <redacted>
Wed, 8 Jan 2025 10:54:19 +0000 (16:24 +0530)
committer Georgi Gerganov <redacted>
Tue, 14 Jan 2025 07:36:36 +0000 (09:36 +0200)
commit f10f3ed00e4dc35d75a5833b542400382eb5fa87
tree b0cff2f53b5783732413452817064236ce879dab
parent d3e586131255da03fe52ded43bf861dc7d2aeb44
llamafile : ppc64le MMA INT8 implementation (llama/10912)

This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the quantised INT8 datatype.

This change results in a 10% - 70% improvement
in total speed (i.e. all tokens / total time),
across various batch sizes.

The patch is tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
src/ggml-cpu/llamafile/sgemm.cpp