git.djapps.eu Git - pkg/ggml/sources/ggml/commit
AVX BF16 and single scale quant optimizations (llama/10212)
author     Eve <redacted>
           Fri, 15 Nov 2024 11:47:58 +0000 (11:47 +0000)
committer  Georgi Gerganov <redacted>
           Fri, 15 Nov 2024 20:51:53 +0000 (22:51 +0200)
commit     73dfb061a7762fb1b0b70ed2ad46e6452fb46720
tree       6c7f51355723e61973e1e992d0b279562c0f8470
parent     538487872e43030458b23d5dbc87bc30db9cc170
AVX BF16 and single scale quant optimizations (llama/10212)

* use 128-bit loads (I've tried 256->128 to death and it's slower)
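
A minimal sketch of the 128-bit load pattern, assuming a 32-byte block of int8 quants (helper name and signature are illustrative, not the commit's exact code):

    #include <immintrin.h>
    #include <stdint.h>

    /* Fetch a 32-byte block as two 128-bit halves rather than one
     * 256-bit load that the core may split internally anyway. */
    static inline void load_block_as_two_i128(const int8_t *qs, __m128i *lo, __m128i *hi) {
        *lo = _mm_loadu_si128((const __m128i *)(qs +  0));
        *hi = _mm_loadu_si128((const __m128i *)(qs + 16));
    }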

* double accumulator
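
A sketch of the double-accumulator idea, shown here on plain f32 data (names and the f32 setting are illustrative; the point is the two independent FMA chains, which hide FMA latency better than one dependent chain):

    #include <immintrin.h>

    /* n is assumed to be a multiple of 16. */
    static float dot_two_accumulators(const float *a, const float *b, int n) {
        __m256 acc0 = _mm256_setzero_ps();
        __m256 acc1 = _mm256_setzero_ps();
        for (int i = 0; i < n; i += 16) {
            acc0 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),     _mm256_loadu_ps(b + i),     acc0);
            acc1 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i + 8), _mm256_loadu_ps(b + i + 8), acc1);
        }
        __m256 acc = _mm256_add_ps(acc0, acc1);  // combine the two chains once at the end
        __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc), _mm256_extractf128_ps(acc, 1));
        s = _mm_add_ps(s, _mm_movehl_ps(s, s));
        s = _mm_add_ss(s, _mm_shuffle_ps(s, s, 1));
        return _mm_cvtss_f32(s);
    }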

* AVX BF16 vec dot
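
The BF16 trick on AVX2 hardware without native BF16 support: a bf16 value is just the high 16 bits of an f32, so zero-extending each 16-bit lane and shifting left by 16 reproduces the exact f32. A sketch under that assumption (function names are illustrative):

    #include <immintrin.h>
    #include <stdint.h>

    static inline __m256 bf16_to_f32_8(const uint16_t *p) {
        const __m128i v = _mm_loadu_si128((const __m128i *)p);  // 8 bf16 values
        const __m256i w = _mm256_cvtepu16_epi32(v);             // zero-extend to 32 bits
        return _mm256_castsi256_ps(_mm256_slli_epi32(w, 16));   // move into f32 position
    }

    /* n is assumed to be a multiple of 8. */
    static float vec_dot_bf16(const uint16_t *x, const uint16_t *y, int n) {
        __m256 acc = _mm256_setzero_ps();
        for (int i = 0; i < n; i += 8) {
            acc = _mm256_fmadd_ps(bf16_to_f32_8(x + i), bf16_to_f32_8(y + i), acc);
        }
        float s[8];
        _mm256_storeu_ps(s, acc);
        return s[0] + s[1] + s[2] + s[3] + s[4] + s[5] + s[6] + s[7];
    }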

* +3% q4_0 inference

* +7% tg, +5% pp compared to master (tg = text generation, pp = prompt processing)

* slower F16C version, kept for reference

* 256-bit version, also slow. I tried :)

* revert f16

* faster with madd
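
A sketch of the multiply-add path for signed int8 pairs: _mm_maddubs_epi16 wants unsigned x signed byte operands, so the usual trick moves the sign of x onto y, then _mm_madd_epi16 against ones widens the 16-bit pair sums to 32-bit (helper name is illustrative):

    #include <immintrin.h>

    static inline __m128i mul_sum_i8_pairs_i32(__m128i x, __m128i y) {
        const __m128i ax  = _mm_sign_epi8(x, x);           // |x|, as unsigned bytes
        const __m128i sy  = _mm_sign_epi8(y, x);           // y carrying x's sign
        const __m128i p16 = _mm_maddubs_epi16(ax, sy);     // pairwise u8*s8 -> i16
        return _mm_madd_epi16(p16, _mm_set1_epi16(1));     // widen pairs -> i32
    }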

* split into functions

* Q8_0 and IQ4_NL, 5-7% faster
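
For IQ4_NL, each 4-bit value is an index into a fixed non-linear codebook of 16 int8 values, which maps naturally onto a single _mm_shuffle_epi8 table lookup per 16 indices. A sketch, assuming ggml's kvalues_iq4nl codebook (helper name is illustrative):

    #include <immintrin.h>
    #include <stdint.h>

    static const int8_t kvalues_iq4nl[16] = {
        -127, -104, -83, -65, -49, -35, -22, -10, 1, 13, 25, 38, 53, 69, 89, 113,
    };

    /* Expand 32 packed 4-bit indices (16 bytes) into 32 codebook values. */
    static inline void iq4nl_expand(__m128i packed, __m128i *lo, __m128i *hi) {
        const __m128i tbl = _mm_loadu_si128((const __m128i *)kvalues_iq4nl);
        const __m128i m4  = _mm_set1_epi8(0x0F);
        *lo = _mm_shuffle_epi8(tbl, _mm_and_si128(packed, m4));
        *hi = _mm_shuffle_epi8(tbl, _mm_and_si128(_mm_srli_epi16(packed, 4), m4));
    }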

* fix potential overflow (performance reduced)
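
The overflow concern: _mm_maddubs_epi16 saturates its 16-bit pair sums, and chaining further 16-bit adds across blocks can wrap. The safe pattern widens to 32-bit before accumulating, at the cost of an extra _mm_madd_epi16 per block, which matches the performance note above (a sketch, names illustrative):

    #include <immintrin.h>

    /* pairs_i16: output of _mm_maddubs_epi16 for one block half. */
    static inline __m128i acc_widen_i32(__m128i acc_i32, __m128i pairs_i16) {
        return _mm_add_epi32(acc_i32, _mm_madd_epi16(pairs_i16, _mm_set1_epi16(1)));
    }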

* 16-bit add for q4_0 only
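
Why the 16-bit shortcut is still safe for q4_0: its quants are unsigned nibbles in [0, 15] (the -8 offset is assumed to be folded in elsewhere), so one _mm_maddubs_epi16 pair sum is at most 2 * 15 * 128 = 3840, and adding the two halves of a 32-element block stays within 7680, far below the int16 limit of 32767. For Q8_0 a single pair sum can already reach 2 * 128 * 127 = 32512, so any further 16-bit add may wrap. A sketch of the q4_0 case:

    #include <immintrin.h>

    /* nib_lo/nib_hi: unsigned nibbles in [0,15]; y_lo/y_hi: signed int8. */
    static inline __m128i q4_0_pairs_i16(__m128i nib_lo, __m128i nib_hi,
                                         __m128i y_lo,  __m128i y_hi) {
        const __m128i p0 = _mm_maddubs_epi16(nib_lo, y_lo);  // |pair sum| <= 3840
        const __m128i p1 = _mm_maddubs_epi16(nib_hi, y_hi);
        return _mm_add_epi16(p0, p1);                        // <= 7680, fits int16
    }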

* merge
src/ggml-cpu/ggml-cpu-quants.c
src/ggml-cpu/ggml-cpu.c