author    amritahs-ibm <redacted>
          Wed, 8 Jan 2025 10:54:19 +0000 (16:24 +0530)
committer Georgi Gerganov <redacted>
          Tue, 14 Jan 2025 08:38:01 +0000 (10:38 +0200)
commit    124eec1664ed9b96c7b13c46c29773ecdb3d8de6
tree      6da9dc6a86202d525ce7e63d725f64696008ded4
parent    b08c3a88c8ed0c096c041d066b1bf70720858a1a
llamafile : ppc64le MMA INT8 implementation (llama/10912)

This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the quantised int8 data type.
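
For illustration only, below is a minimal sketch of the kind of
4x4 int8 tile kernel the POWER10 MMA builtins enable. It is not
the actual sgemm.cpp code; the helper name gemm_tile_4x4_int8 and
the packing layout are assumptions made for the example.

    // Minimal sketch, assuming A and B are pre-packed so that each
    // 16-byte vector holds 4 rows (or columns) x 4 consecutive
    // k-values, and K is a multiple of 4. Per the Power ISA,
    // xvi8ger4 treats one operand as unsigned and the other as
    // signed 8-bit data; real kernels account for this in their
    // quantisation handling.
    // Build with: g++ -O2 -mcpu=power10
    #include <altivec.h>
    #include <cstdint>

    typedef vector unsigned char vec_t;

    static void gemm_tile_4x4_int8(const int8_t *A, const int8_t *B,
                                   int32_t *C, int ldc, int K) {
        __vector_quad acc;              // 4x4 int32 accumulator tile
        __builtin_mma_xxsetaccz(&acc);  // zero the accumulator

        for (int k = 0; k < K; k += 4) {
            vec_t va = vec_xl(0, (const unsigned char *)(A + 4 * k));
            vec_t vb = vec_xl(0, (const unsigned char *)(B + 4 * k));
            // rank-4 update: acc[i][j] += sum_t A[i][t] * B[j][t]
            __builtin_mma_xvi8ger4pp(&acc, va, vb);
        }

        vector signed int rows[4];
        __builtin_mma_disassemble_acc(rows, &acc);  // spill accumulator
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                C[i * ldc + j] += rows[i][j];
    }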

This change results in a 10% - 70% improvement
in total speed (i.e. all tokens / total time)
across various batch sizes.

The patch is tested with the Meta-Llama-3-8B,
Mistral-7B and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
ggml/src/ggml-cpu/llamafile/sgemm.cpp