ggml : optimize llamafile cpu matrix multiplication for ppc64le (llama/10156)
author     amritahs-ibm <redacted>
Sat, 9 Nov 2024 07:17:50 +0000 (12:47 +0530)
committer  Georgi Gerganov <redacted>
Fri, 15 Nov 2024 13:21:04 +0000 (15:21 +0200)
commit     b7b38f7d68d0eb3a0668fc779aa055c7f1980489
tree       3bb58f7396c6db773d5ed8bd25a3b8d2ac0502f3
parent     9f67aab2119830a02fa77407185362d1f022393d
ggml : optimize llamafile cpu matrix multiplication for ppc64le (llama/10156)

This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the FP32 data type.

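Illustrative only (not part of this patch): a
minimal sketch, assuming GCC or Clang with
-mcpu=power10, of accumulating a 4x4 FP32 tile
with the POWER10 MMA builtins. The helper name
gemm_4x4_f32 and its data layout are
hypothetical, not taken from the kernel code.

    #include <altivec.h>

    typedef vector unsigned char vec_t;

    // Accumulate a 4x4 FP32 tile of C += A*B over k rank-1 updates.
    // a: k groups of 4 floats (column slices of A),
    // b: k groups of 4 floats (row slices of B),
    // c: row-major output tile with leading dimension ldc.
    static void gemm_4x4_f32(const float *a, const float *b, int k,
                             float *c, int ldc) {
        __vector_quad acc;
        __builtin_mma_xxsetaccz(&acc);              // zero the accumulator

        for (int i = 0; i < k; ++i) {
            vec_t va = (vec_t) vec_xl(0, a + 4*i);  // load 4 floats of A
            vec_t vb = (vec_t) vec_xl(0, b + 4*i);  // load 4 floats of B
            __builtin_mma_xvf32gerpp(&acc, va, vb); // acc += va * vb^T
        }

        vector float rows[4];
        __builtin_mma_disassemble_acc(rows, &acc);  // unpack the 4x4 result
        for (int r = 0; r < 4; ++r) {
            vec_xst(rows[r], 0, c + r*ldc);         // store one row of C
        }
    }
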
This change results in a consistent 90%
improvement in input processing time and a
20% to 80% improvement in output processing
time across various batch sizes.

The patch is tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on
an IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
ggml/src/CMakeLists.txt