llamafile : ppc64le MMA INT8 implementation (#10912)
author	amritahs-ibm <redacted>
	Wed, 8 Jan 2025 10:54:19 +0000 (16:24 +0530)
committer	GitHub <redacted>
	Wed, 8 Jan 2025 10:54:19 +0000 (12:54 +0200)
commit	8cef75c743ba13ebbd6d380c531200c768a8b8aa
tree	8bc0c23eec5346ece07ee6ed151ec122c0235ea0
parent	0d52a69e4bf0d6181beec7853307bdcdeec9905b
llamafile : ppc64le MMA INT8 implementation (#10912)

This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the quantised int8 data type.

This change results in a 10% to 70% improvement
in total speed (i.e., all tokens / total time)
across various batch sizes.

The patch is tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
ggml/src/ggml-cpu/llamafile/sgemm.cpp