AVX BF16 and single scale quant optimizations (#10212)
author Eve <redacted>
Fri, 15 Nov 2024 11:47:58 +0000 (11:47 +0000)
committer GitHub <redacted>
Fri, 15 Nov 2024 11:47:58 +0000 (12:47 +0100)
commit 18429220bdb344da1bc7df9bc580c7b41b3cd57b
tree 5e5185731dfb55e34d85d3ab5b39cef35763a79f
parent f0204a0ec70d50ca60e07bc0096ec1d6508ab0c7

* use 128-bit loads (I've tried 256->128 to death and it's slower)

* double accumulator

* AVX BF16 vec dot (see the first sketch after this list)

* +3% Q4_0 inference

* +7% tg (text generation), +5% pp (prompt processing) compared to master

* slower F16C version, kept for reference

* 256-bit version, also slow. I tried :)

* revert f16

* faster with madd (see the second sketch after this list)

* split into functions

* Q8_0 and IQ4_NL, 5-7% faster

* fix potential overflow (performance reduced)

* 16-bit add for Q4_0 only (see the third sketch after this list)

* merge
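
First sketch: a minimal illustration of an AVX BF16 vec dot of the kind described above, not the commit's actual code; the function and variable names are hypothetical, and it assumes AVX2+FMA and n being a multiple of 16. BF16 is the upper 16 bits of an IEEE f32, so a 128-bit load of eight bf16 values, zero-extension to 32 bits, and a left shift by 16 reconstructs each float exactly; two independent accumulators help hide FMA latency.

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

// Hypothetical sketch; compile with -mavx2 -mfma. Assumes n % 16 == 0.
static float bf16_dot_avx2(const uint16_t * a, const uint16_t * b, size_t n) {
    __m256 acc0 = _mm256_setzero_ps();
    __m256 acc1 = _mm256_setzero_ps();   // double accumulator
    for (size_t i = 0; i < n; i += 16) {
        // 128-bit loads: 8 bf16 values per register
        __m128i a0 = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i b0 = _mm_loadu_si128((const __m128i *)(b + i));
        __m128i a1 = _mm_loadu_si128((const __m128i *)(a + i + 8));
        __m128i b1 = _mm_loadu_si128((const __m128i *)(b + i + 8));
        // zero-extend to 32 bits, shift into the high half: bf16 -> f32
        __m256 fa0 = _mm256_castsi256_ps(_mm256_slli_epi32(_mm256_cvtepu16_epi32(a0), 16));
        __m256 fb0 = _mm256_castsi256_ps(_mm256_slli_epi32(_mm256_cvtepu16_epi32(b0), 16));
        __m256 fa1 = _mm256_castsi256_ps(_mm256_slli_epi32(_mm256_cvtepu16_epi32(a1), 16));
        __m256 fb1 = _mm256_castsi256_ps(_mm256_slli_epi32(_mm256_cvtepu16_epi32(b1), 16));
        acc0 = _mm256_fmadd_ps(fa0, fb0, acc0);
        acc1 = _mm256_fmadd_ps(fa1, fb1, acc1);
    }
    // horizontal sum of both accumulators
    __m256 acc = _mm256_add_ps(acc0, acc1);
    __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc), _mm256_extractf128_ps(acc, 1));
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}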
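
Second sketch: the maddubs/madd reduction that the "faster with madd" step refers to, written in the common AVX2 pattern for integer quant dot products; the helper names here are illustrative, not lifted from the commit. One 128-bit load fetches a Q4_0 block's 32 packed nibbles; _mm256_maddubs_epi16 multiplies unsigned 8-bit weights by signed 8-bit Q8_0 activations and pair-sums to 16 bits; a _mm256_madd_epi16 against ones widens to 32 bits.

#include <immintrin.h>
#include <stdint.h>

// Unpack 32 packed nibbles (one Q4_0 block) to 32 bytes via one 128-bit load:
// low nibbles land in the lower 128-bit lane, high nibbles in the upper lane.
static inline __m256i bytes_from_nibbles_32(const uint8_t * qs) {
    const __m128i q4 = _mm_loadu_si128((const __m128i *)qs);
    const __m256i lo_hi = _mm256_insertf128_si256(
        _mm256_castsi128_si256(q4), _mm_srli_epi16(q4, 4), 1);
    return _mm256_and_si256(lo_hi, _mm256_set1_epi8(0x0F));
}

// u8 weights times s8 activations -> eight s32 partial sums.
static inline __m256i mul_sum_us8_i32(__m256i x, __m256i y) {
    const __m256i dot16 = _mm256_maddubs_epi16(x, y);        // pair-sums to s16
    return _mm256_madd_epi16(dot16, _mm256_set1_epi16(1));   // widen to s32
}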
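
Third sketch: the arithmetic behind "16-bit add for Q4_0 only" and the earlier overflow fix, again with hypothetical names. For Q4_0 the unsigned operand is a nibble (0..15), so each maddubs lane is bounded by 2 * 15 * 128 = 3840 in magnitude, and two such results can be added in 16 bits (peak 7680, well inside the s16 range) before widening; quant types whose unsigned operand can reach 255 could overflow an s16 here and must widen to 32 bits first.

#include <immintrin.h>

// Sum two Q4_0 x Q8_0 maddubs products with a 16-bit add, then widen.
static inline __m256i sum_two_q4_0_products(__m256i x0, __m256i y0,
                                            __m256i x1, __m256i y1) {
    const __m256i p0  = _mm256_maddubs_epi16(x0, y0);        // |lane| <= 3840
    const __m256i p1  = _mm256_maddubs_epi16(x1, y1);        // |lane| <= 3840
    const __m256i s16 = _mm256_add_epi16(p0, p1);            // <= 7680: no s16 overflow
    return _mm256_madd_epi16(s16, _mm256_set1_epi16(1));     // widen to s32
}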
ggml/src/ggml-cpu/ggml-cpu-quants.c
ggml/src/ggml-cpu/ggml-cpu.c