git.djapps.eu Git - pkg/ggml/sources/whisper.cpp/commit
ggml : IQ4_NL sgemm + Q4_0 AVX optimization (llama/9422)
author Eve <redacted>
Mon, 16 Sep 2024 06:48:24 +0000 (06:48 +0000)
committer Georgi Gerganov <redacted>
Tue, 24 Sep 2024 16:45:08 +0000 (19:45 +0300)
commit 374e9e0c5e45e51aa019afc14fdecbe910f1bf9a
tree f88ee2c2ff3c2aa1ff7a4f0f2cbacb42b30a0608
parent a2cb5b4183a7d4b64b8ef3abaa368cfb7f0c991f
ggml : IQ4_NL sgemm + Q4_0 AVX optimization (llama/9422)

* squashed

re-add my iq4_nl sgemm PR https://github.com/ggerganov/llama.cpp/pull/8049

have ggml_vec_dot_q4_0 process two blocks per loop iteration for AVX
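The two-blocks-per-iteration idea can be sketched in scalar C. This is an illustrative simplification, not the actual AVX intrinsics code; `block_q4` and `dot_q4_unrolled` are hypothetical stand-ins for ggml's `block_q4_0` and `ggml_vec_dot_q4_0`, assuming the Q4_0 layout of 32 weights per block with a per-block scale and a -8 offset:

```c
#include <stdint.h>

// Hypothetical simplified Q4_0-style block: 32 4-bit weights + one scale.
#define QK 32
typedef struct {
    float   d;           // per-block scale
    uint8_t qs[QK / 2];  // 4-bit quants, two per byte
} block_q4;

// Scalar sketch of a dot product that handles TWO quant blocks per loop
// iteration, mirroring the unrolling in the commit (the real routine
// does this with AVX intrinsics so both blocks fill the vector lanes).
static float dot_q4_unrolled(int nblocks, const block_q4 *x, const float *y) {
    float sum = 0.0f;
    for (int i = 0; i < nblocks; i += 2) {   // two blocks per iteration
        for (int b = 0; b < 2; ++b) {
            const block_q4 *xb = &x[i + b];
            const float    *yb = &y[(i + b) * QK];
            float acc = 0.0f;
            for (int j = 0; j < QK / 2; ++j) {
                // unpack two 4-bit values per byte, offset by -8 as in Q4_0:
                // low nibble is element j, high nibble is element j + QK/2
                const int v0 = (xb->qs[j] & 0x0F) - 8;
                const int v1 = (xb->qs[j] >> 4)   - 8;
                acc += v0 * yb[j] + v1 * yb[j + QK / 2];
            }
            sum += xb->d * acc;  // apply the per-block scale once
        }
    }
    return sum;
}
```

Processing two blocks per iteration lets the AVX version keep all vector lanes busy instead of wasting half a register on a single 16-byte block.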

try out an F16C ggml_vec_dot_iq4_nl, but it's not really faster. As per https://github.com/ggerganov/llama.cpp/pull/8549 we can calculate several blocks at a time with no issue.

* shuffle

* remove the F16C iq4_nl path, as I can't make it faster than before
ggml/src/ggml-quants.c