ggml : optimize llamafile cpu matrix multiplication for ppc64le (#10156)
authoramritahs-ibm <redacted>
Sat, 9 Nov 2024 07:17:50 +0000 (12:47 +0530)
committerGitHub <redacted>
Sat, 9 Nov 2024 07:17:50 +0000 (09:17 +0200)
commite89213492d3e01705739789733f0f2d250b4c449
tree3a20a36a45a5ab196575546a0f6b792a3bbde076
parent8fc393f246c550d2481e53323a47644a94e8d01f
ggml : optimize llamafile cpu matrix multiplication for ppc64le (#10156)

This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the FP32 data type.
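
As an illustrative sketch only (not the committed
sgemm.cpp code, whose tiling and helper names differ),
the POWER10 MMA builtins accumulate rank-1 FP32 outer
products into a 512-bit `__vector_quad` accumulator
holding a 4x4 result block; the helper below is
hypothetical and compiles only with GCC/Clang targeting
`-mcpu=power10` on ppc64le:

```c
#include <altivec.h>

// Hypothetical helper: multiply a 4xK slice of A by a Kx4
// slice of B (both pre-packed so each step reads 4 floats)
// into a 4x4 FP32 block of C, using POWER10 MMA builtins.
static void gemm_4x4_tile(const float *A, const float *B,
                          float *C, int K)
{
    __vector_quad acc;              // 4x4 f32 accumulator
    __builtin_mma_xxsetaccz(&acc);  // zero the accumulator

    for (int k = 0; k < K; k++) {
        // Rank-1 update: outer product of a column of A
        // and a row of B, accumulated into acc.
        vector float a = vec_xl(0, A + 4*k);
        vector float b = vec_xl(0, B + 4*k);
        __builtin_mma_xvf32gerpp(&acc,
                                 (vector unsigned char)a,
                                 (vector unsigned char)b);
    }

    vector float rows[4];
    __builtin_mma_disassemble_acc(rows, &acc);
    for (int i = 0; i < 4; i++) {
        vec_xst(rows[i], 0, C + 4*i);  // store row i of C
    }
}
```

Keeping the whole 4x4 block in one accumulator register
is what lets these kernels avoid repeated loads/stores of
partial sums, which is where the speedup comes from.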

This change results in a consistent 90%
improvement in input processing time, and a
20% to 80% improvement in output processing
time, across various batch sizes.

The patch is tested with Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <redacted>
ggml/src/CMakeLists.txt
ggml/src/llamafile/sgemm.cpp