ggml : IQ4_NL sgemm + Q4_0 AVX optimization (#9422)
author Eve <redacted>
Mon, 16 Sep 2024 06:48:24 +0000 (06:48 +0000)
committer GitHub <redacted>
Mon, 16 Sep 2024 06:48:24 +0000 (09:48 +0300)
commit 5c3d0f1824714e9a97fc9b06e046eefcb6ecc721
tree a5ad9dba854abf86e85545e812a177b890a31b0c
parent 0aadac10c7dd704f8285ddf5a63d6f764cb340aa
ggml : IQ4_NL sgemm + Q4_0 AVX optimization (#9422)

* squashed

re-add my IQ4_NL sgemm PR https://github.com/ggerganov/llama.cpp/pull/8049

have ggml_vec_dot_q4_0 process two blocks per loop iteration on AVX
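The two-blocks-per-iteration idea can be illustrated with a minimal scalar sketch. This is not the commit's AVX code: the block layout is simplified (a plain `float` scale instead of ggml's fp16, and raw `int8_t` activations instead of q8_0 blocks), and the names `block_q4`, `dot_block`, and `vec_dot_two_per_loop` are hypothetical. The point it shows is that keeping two independent accumulators lets adjacent blocks' multiply-add chains overlap:

```c
#include <stdint.h>

// Simplified Q4_0-like block: 32 4-bit weights packed two per byte,
// offset by 8, plus a per-block scale (real ggml stores the scale as fp16).
#define QK 32
typedef struct {
    float   d;            // per-block scale
    uint8_t qs[QK / 2];   // 32 packed 4-bit values; low nibble = weight j,
                          // high nibble = weight j + 16
} block_q4;

// Integer dot of one packed block against 32 int8 activations.
static int32_t dot_block(const block_q4 *x, const int8_t *y) {
    int32_t sum = 0;
    for (int j = 0; j < QK / 2; ++j) {
        const int v0 = (x->qs[j] & 0x0F) - 8;  // low nibble
        const int v1 = (x->qs[j] >> 4)   - 8;  // high nibble
        sum += v0 * y[j] + v1 * y[j + QK / 2];
    }
    return sum;
}

// Two blocks per loop iteration, mirroring the unrolling the commit applies
// to the AVX path. Two accumulators break the dependency chain between
// consecutive blocks, which is where the speedup comes from.
static float vec_dot_two_per_loop(int nb, const block_q4 *x, const int8_t *y) {
    float acc0 = 0.0f, acc1 = 0.0f;
    int i = 0;
    for (; i + 1 < nb; i += 2) {
        acc0 += x[i].d     * (float)dot_block(&x[i],     y + i * QK);
        acc1 += x[i + 1].d * (float)dot_block(&x[i + 1], y + (i + 1) * QK);
    }
    if (i < nb)  // handle an odd trailing block
        acc0 += x[i].d * (float)dot_block(&x[i], y + i * QK);
    return acc0 + acc1;
}
```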

tried an F16C ggml_vec_dot_iq4_nl, but it isn't really faster; as per https://github.com/ggerganov/llama.cpp/pull/8549 we can calculate several blocks at a time with no issue
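For context, IQ4_NL differs from Q4_0 in that each 4-bit value is an index into a fixed nonlinear codebook rather than an evenly spaced level. A minimal dequantization sketch follows; the struct name is hypothetical and the layout is simplified (plain `float` scale instead of ggml's fp16), and the codebook values are copied from ggml's `kvalues_iq4nl` table as of this commit's era:

```c
#include <stdint.h>

// Nonlinear 16-entry codebook for IQ4_NL (ggml's kvalues_iq4nl,
// reproduced here for illustration).
static const int8_t kvalues_iq4nl[16] = {
    -127, -104, -83, -65, -49, -35, -22, -10,
       1,   13,  25,  38,  53,  69,  89, 113,
};

#define QK4_NL 32
typedef struct {
    float   d;                  // per-block scale (fp16 in real ggml)
    uint8_t qs[QK4_NL / 2];     // 32 packed 4-bit codebook indices
} block_iq4_nl_sketch;

// Dequantize one block: look each nibble up in the codebook, then scale.
// Low nibble of qs[j] maps to out[j], high nibble to out[j + 16].
static void dequant_iq4_nl(const block_iq4_nl_sketch *b, float *out) {
    for (int j = 0; j < QK4_NL / 2; ++j) {
        out[j]               = b->d * kvalues_iq4nl[b->qs[j] & 0x0F];
        out[j + QK4_NL / 2]  = b->d * kvalues_iq4nl[b->qs[j] >> 4];
    }
}
```

Because the lookup is independent per nibble, nothing prevents processing several blocks per iteration, which is what makes the sgemm tiling in the llamafile path workable for this type.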

* shuffle

* remove the F16C iq4_nl path since I can't make it faster than before
ggml/src/ggml-quants.c
ggml/src/llamafile/sgemm.cpp