Reduce memory usage and allocate enough memory for largest context (#473)
author    Georgi Gerganov <redacted>
          Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)
committer GitHub <redacted>
          Fri, 24 Mar 2023 21:17:37 +0000 (23:17 +0200)
commit    7a9b6c3a8bdc1cb75fefc826dfaa7331eb63695d
tree      339815189c912e9a759a0259613621f6a2adcbf4
parent    31572d966531f7d768eb773322016ab78eb6e835
Reduce memory usage and allocate enough memory for largest context (#473)

* Reduce memory usage and allocate enough memory for large contexts

* Simpler scratch buffer usage

* Reenable BLAS for quantized mul_mat

* Fix number of layers in 30B and 65B

* Fix KV cache size for F32
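
The "Simpler scratch buffer usage" bullet refers to routing short-lived intermediate tensors into a reusable scratch buffer instead of the main ggml context, so the same memory can serve every layer. Below is a minimal sketch of that pattern; it assumes the ggml_set_scratch() API, and the scr0/scr0_size names and sizes are illustrative rather than the literal llama.cpp code:

#include "ggml.h"
#include <stdlib.h>

// Build one transformer layer, placing intermediate tensors in a
// caller-owned scratch buffer that is reused from layer to layer.
static void build_layer(struct ggml_context * ctx0, struct ggml_tensor * inp) {
    static size_t scr0_size = 128u*1024u*1024u;   // illustrative size
    static void * scr0      = NULL;
    if (scr0 == NULL) {
        scr0 = malloc(scr0_size);
    }

    // tensors created after this call are allocated inside scr0
    ggml_set_scratch(ctx0, (struct ggml_scratch) { 0, scr0_size, scr0, });

    // ... create the attention / feed-forward ops for this layer here ...
    (void) inp;

    // switch back to the default allocator for tensors that must outlive the layer
    ggml_set_scratch(ctx0, (struct ggml_scratch) { 0, 0, NULL, });
}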
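The "Fix KV cache size for F32" bullet concerns how many bytes the key/value cache needs when its element type is F32 rather than F16. A small sketch of the sizing arithmetic, assuming one K and one V vector of n_embd elements per layer per context position (the 7B hyper-parameters below are examples, not values taken from this commit):

#include "ggml.h"
#include <stdint.h>
#include <stdio.h>

int main(void) {
    const int64_t n_embd  = 4096;   // example: 7B model
    const int64_t n_layer = 32;
    const int64_t n_ctx   = 2048;   // largest context to pre-allocate for

    const enum ggml_type memory_type = GGML_TYPE_F32;   // or GGML_TYPE_F16

    // K and V each hold n_embd elements per layer per position
    const int64_t n_elements = 2*n_embd*n_layer*n_ctx;

    // sizing from the actual element type is what doubles the buffer for F32
    const size_t kv_bytes = (size_t) n_elements*ggml_type_size(memory_type);

    printf("KV cache for n_ctx=%d: %.1f MB\n", (int) n_ctx, kv_bytes/1024.0/1024.0);
    return 0;
}

With these example numbers the F32 cache comes out to 2048 MB versus 1024 MB for F16, which is why the cache element type has to be taken into account when pre-allocating for the largest context.
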
ggml.c
llama.cpp
main.cpp
utils.cpp
utils.h